From eb85ecc9c480175bd39a2009212ffa81eaebee7c Mon Sep 17 00:00:00 2001
From: Nick Craig-Wood
Date: Sat, 9 Feb 2019 10:42:57 +0000
Subject: [PATCH] Version v1.46

---
 MANUAL.html                                    | 1295 ++++++++---
 MANUAL.md                                      | 1416 ++++++++++--
 MANUAL.txt                                     | 1525 ++++++++++---
 docs/content/b2.md                             |   11 +-
 docs/content/changelog.md                      |  131 +-
 docs/content/commands/rclone.md                |  576 ++---
 docs/content/commands/rclone_about.md          |  572 ++---
 docs/content/commands/rclone_authorize.md      |  572 ++---
 docs/content/commands/rclone_cachestats.md     |  572 ++---
 docs/content/commands/rclone_cat.md            |  572 ++---
 docs/content/commands/rclone_check.md          |  572 ++---
 docs/content/commands/rclone_cleanup.md        |  572 ++---
 docs/content/commands/rclone_config.md         |  572 ++---
 docs/content/commands/rclone_config_create.md  |  581 ++---
 docs/content/commands/rclone_config_delete.md  |  572 ++---
 docs/content/commands/rclone_config_dump.md    |  572 ++---
 docs/content/commands/rclone_config_edit.md    |  572 ++---
 docs/content/commands/rclone_config_file.md    |  572 ++---
 .../commands/rclone_config_password.md         |  572 ++---
 .../commands/rclone_config_providers.md        |  572 ++---
 docs/content/commands/rclone_config_show.md    |  572 ++---
 docs/content/commands/rclone_config_update.md  |  577 ++---
 docs/content/commands/rclone_copy.md           |  583 ++---
 docs/content/commands/rclone_copyto.md         |  572 ++---
 docs/content/commands/rclone_copyurl.md        |  572 ++---
 docs/content/commands/rclone_cryptcheck.md     |  572 ++---
 docs/content/commands/rclone_cryptdecode.md    |  572 ++---
 docs/content/commands/rclone_dbhashsum.md      |  572 ++---
 docs/content/commands/rclone_dedupe.md         |  572 ++---
 docs/content/commands/rclone_delete.md         |  572 ++---
 docs/content/commands/rclone_deletefile.md     |  572 ++---
 .../commands/rclone_genautocomplete.md         |  572 ++---
 .../commands/rclone_genautocomplete_bash.md    |  572 ++---
 .../commands/rclone_genautocomplete_zsh.md     |  572 ++---
 docs/content/commands/rclone_gendocs.md        |  572 ++---
 docs/content/commands/rclone_hashsum.md        |  572 ++---
 docs/content/commands/rclone_link.md           |  572 ++---
 docs/content/commands/rclone_listremotes.md    |  574 ++---
 docs/content/commands/rclone_ls.md             |  572 ++---
 docs/content/commands/rclone_lsd.md            |  572 ++---
 docs/content/commands/rclone_lsf.md            |  572 ++---
 docs/content/commands/rclone_lsjson.md         |  580 ++---
 docs/content/commands/rclone_lsl.md            |  572 ++---
 docs/content/commands/rclone_md5sum.md         |  572 ++---
 docs/content/commands/rclone_mkdir.md          |  572 ++---
 docs/content/commands/rclone_mount.md          |  637 +++---
 docs/content/commands/rclone_move.md           |  577 ++---
 docs/content/commands/rclone_moveto.md         |  572 ++---
 docs/content/commands/rclone_ncdu.md           |  572 ++---
 docs/content/commands/rclone_obscure.md        |  572 ++---
 docs/content/commands/rclone_purge.md          |  572 ++---
 docs/content/commands/rclone_rc.md             |  572 ++---
 docs/content/commands/rclone_rcat.md           |  572 ++---
 docs/content/commands/rclone_rcd.md            |  574 ++---
 docs/content/commands/rclone_rmdir.md          |  572 ++---
 docs/content/commands/rclone_rmdirs.md         |  572 ++---
 docs/content/commands/rclone_serve.md          |  573 ++---
 docs/content/commands/rclone_serve_dlna.md     |  495 ++++
 docs/content/commands/rclone_serve_ftp.md      |  619 ++---
 docs/content/commands/rclone_serve_http.md     |  633 +++---
 docs/content/commands/rclone_serve_restic.md   |  572 ++---
 docs/content/commands/rclone_serve_webdav.md   |  635 +++---
 docs/content/commands/rclone_settier.md        |  572 ++---
 docs/content/commands/rclone_sha1sum.md        |  572 ++---
 docs/content/commands/rclone_size.md           |  572 ++---
 docs/content/commands/rclone_sync.md           |  572 ++---
 docs/content/commands/rclone_touch.md          |  572 ++---
 docs/content/commands/rclone_tree.md           |  572 ++---
 docs/content/commands/rclone_version.md        |  572 ++---
 docs/content/drive.md                          |   18 +
 docs/content/googlecloudstorage.md             |   12 +
 docs/content/http.md                           |    2 +
 docs/content/hubic.md                          |   18 +
 docs/content/jottacloud.md                     |   20 +-
 docs/content/local.md                          |    9 +
 docs/content/qingstor.md                       |    5 +-
 docs/content/rc.md                             |   28 +-
 docs/content/s3.md                             |   53 +-
 docs/content/swift.md                          |   72 +-
 docs/layouts/partials/version.html             |    2 +-
 fs/version.go                                  |    2 +-
 rclone.1                                       | 2019 +++++++++++++++--
 82 files changed, 24832 insertions(+), 18620 deletions(-)
 create mode 100644 docs/content/commands/rclone_serve_dlna.md

diff --git a/MANUAL.html b/MANUAL.html
index f4ac2df73..f95ca6dd8 100644
--- a/MANUAL.html
+++ b/MANUAL.html
@@ -12,12 +12,13 @@

Rclone

Logo

Rclone is a command line program to sync files and directories to and from:

-

This returns - jobid - ID of async job to query with job/status

Authentication is required for this call.

operations/copyurl: Copy the URL to the object

This takes the following parameters

@@ -2610,7 +2734,6 @@ rclone rc core/bwlimit rate=off
  • dstFs - a remote name string eg "drive2:" for the destination
  • dstRemote - a path within that remote eg "file2.txt" for the destination
  • -

    This returns - jobid - ID of async job to query with job/status

    Authentication is required for this call.

    operations/purge: Remove a directory or container and all of its contents

    This takes the following parameters

    @@ -2662,6 +2785,13 @@ rclone rc core/bwlimit rate=off

    Repeated as often as required.

    Only supply the options you wish to change. If an option is unknown it will be silently ignored. Not all options will have an effect when changed like this.

    +

    For example:

    +

    This sets DEBUG level logs (-vv)

    +
    rclone rc options/set --json '{"main": {"LogLevel": 8}}'
    +

    And this sets INFO level logs (-v)

    +
    rclone rc options/set --json '{"main": {"LogLevel": 7}}'
    +

    And this sets NOTICE level logs (normal without -v)

    +
    rclone rc options/set --json '{"main": {"LogLevel": 6}}'
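The same options/set calls can also be made over the rc HTTP API directly by POSTing the JSON body. A minimal sketch, assuming rclone is running with --rc on the default localhost:5572 address (the helper names are ours, and only the payload construction is exercised here):

```python
import json
from urllib import request

def loglevel_payload(level: int) -> bytes:
    """Build the options/set body; per the examples above,
    8 = DEBUG, 7 = INFO, 6 = NOTICE."""
    return json.dumps({"main": {"LogLevel": level}}).encode()

def set_loglevel(level: int, url: str = "http://localhost:5572") -> None:
    # POST to the rc endpoint; requires a running rclone rc server.
    req = request.Request(url + "/options/set",
                          data=loglevel_payload(level),
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)
```

For example, set_loglevel(8) would be equivalent to the first rclone rc invocation above.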

    rc/error: This returns an error

    This returns an error with the input as part of its error string. Useful for testing error handling.

    rc/list: List all the registered remote control commands

    @@ -2677,7 +2807,6 @@ rclone rc core/bwlimit rate=off
  • srcFs - a remote name string eg "drive:src" for the source
  • dstFs - a remote name string eg "drive:dst" for the destination
  • -

    This returns - jobid - ID of async job to query with job/status

    See the copy command command for more information on the above.

    Authentication is required for this call.

    sync/move: move a directory from source remote to destination remote

    @@ -2687,7 +2816,6 @@ rclone rc core/bwlimit rate=off
  • dstFs - a remote name string eg "drive:dst" for the destination
  • deleteEmptySrcDirs - delete empty src directories if set
  • -

    This returns - jobid - ID of async job to query with job/status

    See the move command command for more information on the above.

    Authentication is required for this call.

    sync/sync: sync a directory from source remote to destination remote

    @@ -2696,7 +2824,6 @@ rclone rc core/bwlimit rate=off
  • srcFs - a remote name string eg "drive:src" for the source
  • dstFs - a remote name string eg "drive:dst" for the destination
  • -

    This returns - jobid - ID of async job to query with job/status

    See the sync command command for more information on the above.

    Authentication is required for this call.

    vfs/forget: Forget files or directories in the directory cache.

@@ -3000,8 +3127,8 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
 WebDAV
--
-Yes ††
+MD5, SHA1 ††
+Yes †††
 Depends
 No
 -
@@ -3029,7 +3156,8 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total

To verify checksums when transferring between cloud storage systems, they must support a common hash type.

    † Note that Dropbox supports its own custom hash. This is an SHA256 sum of all the 4MB block SHA256s.

    ‡ SFTP supports checksums if the same login has shell access and md5sum or sha1sum as well as echo are in the remote's PATH.

    -

    †† WebDAV supports modtimes when used with Owncloud and Nextcloud only.

    +

    †† WebDAV supports hashes when used with Owncloud and Nextcloud only.

    +

    ††† WebDAV supports modtimes when used with Owncloud and Nextcloud only.

    ‡‡ Microsoft OneDrive Personal supports SHA1 hashes, whereas OneDrive for business and SharePoint server support Microsoft's own QuickXorHash.

    ModTime

The cloud storage system supports setting modification times on objects. If it does, this enables using the modification times as part of the sync. If not, only the size will be checked by default, though the MD5SUM can be checked with the --checksum flag.

@@ -3308,7 +3436,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
 No
 Yes ‡
 No #2178
-No
+Yes
 Yandex Disk
@@ -3359,6 +3487,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total

    Sets the necessary permissions on a file or folder and prints a link that allows others to access them, even if they don't have an account on the particular cloud provider.

    About

    This is used to fetch quota information from the remote, like bytes used/free/quota and bytes used in the trash.

    +

    This is also used to return the space used, available for rclone mount.

    If the server can't do About then rclone about will return an error.

    Alias

    The alias remote provides a new name for another remote.

    @@ -3647,6 +3776,7 @@ y/e/d> y

    The S3 backend can be used with a number of different providers:

    @@ -7902,6 +8294,17 @@ y/e/d> y
  • Type: SizeSuffix
  • Default: 5G
  • +

    --hubic-no-chunk

    +

    Don't chunk files during streaming upload.

    +

    When doing streaming uploads (eg using rcat or mount) setting this flag will cause the swift backend to not upload chunked files.

    +

    This will limit the maximum upload size to 5GB. However non chunked files are easier to deal with and have an MD5SUM.

    +

    Rclone will still chunk files bigger than chunk_size when doing normal copy operations.

    +

    Limitations

    This uses the normal OpenStack Swift mechanism to refresh the Swift API credentials and ignores the expires field returned by the Hubic API.

    @@ -7983,21 +8386,13 @@ y/e/d> y

    Standard Options

    Here are the standard options specific to jottacloud (JottaCloud).

    --jottacloud-user

    -

    User Name

    +

    User Name:

    -

    --jottacloud-pass

    -

    Password.

    -

    --jottacloud-mountpoint

    The mountpoint to use.

    +

    --jottacloud-upload-resume-limit

    +

Files bigger than this can be resumed if the upload fails.

    +

    Limitations

    Note that Jottacloud is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

    @@ -8492,11 +8895,22 @@ y/e/d> y

    Limitations

    Note that OneDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

There are quite a few characters that can't be in OneDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ? in it, it will be mapped to the full width ？ instead.

    -

    The largest allowed file size is 10GiB (10,737,418,240 bytes).

    +

    The largest allowed file sizes are 15GB for OneDrive for Business and 35GB for OneDrive Personal (Updated 4 Jan 2019).

    +

    The entire path, including the file name, must contain fewer than 400 characters for OneDrive, OneDrive for Business and SharePoint Online. If you are encrypting file and folder names with rclone, you may want to pay attention to this limitation because the encrypted names are typically longer than the original ones.
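A quick way to spot offending paths before syncing is to check their length against the limit described above. The helper below is hypothetical (not part of rclone), a minimal sketch of the 400-character rule:

```python
# OneDrive's documented limit: the whole path, including the file name,
# must contain fewer than 400 characters (helper name is hypothetical).
ONEDRIVE_MAX_PATH = 400

def too_long(paths):
    """Return the paths that would exceed OneDrive's path-length limit."""
    return [p for p in paths if len(p) >= ONEDRIVE_MAX_PATH]
```

Remember that with rclone's crypt backend the encrypted names are what count, so check the lengths after encryption, not before.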

    OneDrive seems to be OK with at least 50,000 files in a folder, but at 100,000 rclone will get errors listing the directory like couldn’t list files: UnknownError:. See #2707 for more info.

    +

    An official document about the limitations for different types of OneDrive can be found here.

    Versioning issue

Every change in OneDrive causes the service to create a new version. This counts against a user's quota. For example changing the modification time of a file creates a second version, so the file is using twice the space.

    The copy is the only rclone command affected by this as we copy the file and then afterwards set the modification time to match the source file.

    +

    Note: Starting October 2018, users will no longer be able to disable versioning by default. This is because Microsoft has brought an update to the mechanism. To change this new default setting, a PowerShell command is required to be run by a SharePoint admin. If you are an admin, you can run these commands in PowerShell to change that setting:

    +
      +
1. Install-Module -Name Microsoft.Online.SharePoint.PowerShell (in case you haven't installed this already)
2. Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking
3. Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU@YOURSITE.COM (replacing YOURSITE, YOU, YOURSITE.COM with the actual values; this will prompt for your credentials)
4. Set-SPOTenant -EnableMinimumVersionRequirement $False
5. Disconnect-SPOService (to disconnect from the server)
    +

    Below are the steps for normal users to disable versioning. If you don't see the "No Versioning" option, make sure the above requirements are met.

    User Weropol has found a method to disable versioning on OneDrive

    1. Open the settings menu by clicking on the gear symbol at the top of the OneDrive Business page.
@@ -8803,6 +9217,37 @@ y/e/d> y

  • Type: int
  • Default: 3

+

      --qingstor-upload-cutoff

      +

      Cutoff for switching to chunked upload

      +

      Any files larger than this will be uploaded in chunks of chunk_size. The minimum is 0 and the maximum is 5GB.

      + +

      --qingstor-chunk-size

      +

      Chunk size to use for uploading.

      +

      When uploading files larger than upload_cutoff they will be uploaded as multipart uploads using this chunk size.

      +

      Note that "--qingstor-upload-concurrency" chunks of this size are buffered in memory per transfer.

      +

If you are transferring large files over high-speed links and you have enough memory, then increasing this will speed up the transfers.

      + +

      --qingstor-upload-concurrency

      +

      Concurrency for multipart uploads.

      +

      This is the number of chunks of the same file that are uploaded concurrently.

      +

NB if you set this to > 1 then the checksums of multipart uploads become corrupted (the uploads themselves are not corrupted though).

      +

If you are uploading small numbers of large files over high-speed links and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers.

      +
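As noted above, "--qingstor-upload-concurrency" chunks of chunk_size are buffered in memory per transfer, so the memory cost is the product of the two settings. A small illustrative sketch (the function name is ours, not rclone's):

```python
def buffered_memory(chunk_size: int, concurrency: int) -> int:
    """Upper bound on memory buffered per transfer for multipart uploads:
    one in-flight buffer of chunk_size bytes per concurrent chunk upload."""
    return chunk_size * concurrency

MiB = 1024 * 1024
# e.g. 4 MiB chunks with upload_concurrency 4 buffer up to 16 MiB per transfer
assert buffered_memory(4 * MiB, 4) == 16 * MiB
```

Multiply again by the number of simultaneous transfers (--transfers) to estimate total memory use.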

      Swift

      Swift refers to Openstack Object Storage. Commercial implementations of that being:

      @@ -9125,6 +9570,30 @@ rclone lsd myremote:
  • Type: string
  • Default: ""

+

      --swift-application-credential-id

      +

      Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)

      + +

      --swift-application-credential-name

      +

      Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)

      + +

      --swift-application-credential-secret

      +

      Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)

      +

      --swift-auth-version

      AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)

      +

      --swift-no-chunk

      +

      Don't chunk files during streaming upload.

      +

      When doing streaming uploads (eg using rcat or mount) setting this flag will cause the swift backend to not upload chunked files.

      +

      This will limit the maximum upload size to 5GB. However non chunked files are easier to deal with and have an MD5SUM.

      +

      Rclone will still chunk files bigger than chunk_size when doing normal copy operations.

      +

      Modified time

      The modified time is stored as metadata on the object as X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns.
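Since the modified time is stored in X-Object-Meta-Mtime as seconds since the epoch with nanosecond precision, a round-trip encoder/decoder can be sketched as below. This is an illustration of the format described above, assuming a plain decimal "seconds.nanoseconds" representation; the helper names are ours, and rclone's own Go implementation may format the float differently:

```python
def encode_mtime(ns: int) -> str:
    """Format an integer nanosecond timestamp as an
    X-Object-Meta-Mtime style value: seconds.nanoseconds."""
    return "%d.%09d" % (ns // 1_000_000_000, ns % 1_000_000_000)

def decode_mtime(value: str) -> int:
    """Parse the metadata value back to integer nanoseconds,
    padding short fractions (e.g. "…​.5") out to 9 digits."""
    sec, _, frac = value.partition(".")
    return int(sec) * 1_000_000_000 + int((frac + "000000000")[:9])

# round trip preserves full nanosecond precision
assert decode_mtime(encode_mtime(1_549_705_377_123_456_789)) == 1_549_705_377_123_456_789
```

Integer nanoseconds are used here rather than Python floats because a 64-bit float cannot represent an epoch timestamp to 1 ns.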

      @@ -9409,8 +9889,10 @@ y/e/d> y
  • Key file
  • ssh-agent
    11. -

      Key files should be unencrypted PEM-encoded private key files. For instance /home/$USER/.ssh/id_rsa.

      +

      Key files should be PEM-encoded private key files. For instance /home/$USER/.ssh/id_rsa. Only unencrypted OpenSSH or PEM encrypted files are supported.

      If you don't specify pass or key_file then rclone will attempt to contact an ssh-agent.

      +

      You can also specify key_use_agent to force the usage of an ssh-agent. In this case key_file can also be specified to force the usage of a specific key in the ssh-agent.

      +

      Using an ssh-agent is the only way to load encrypted OpenSSH keys at the moment.

      If you set the --sftp-ask-password option, rclone will prompt for a password when needed and no password has been configured.

      ssh-agent on macOS

      Note that there seem to be various problems with using an ssh-agent on macOS due to recent changes in the OS. The most effective work-around seems to be to start an ssh-agent in each session, eg

      @@ -9465,13 +9947,31 @@ y/e/d> y
  • Default: ""

--sftp-key-file

      -

      Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.

      +

      Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.

      +

      --sftp-key-file-pass

      +

      The passphrase to decrypt the PEM-encoded private key file.

      +

      Only PEM encrypted key files (old OpenSSH format) are supported. Encrypted keys in the new OpenSSH format can't be used.

      + +

      --sftp-key-use-agent

      +

      When set forces the usage of the ssh-agent.

      +

When key-file is also set, the ".pub" file of the specified key-file is read and only the associated key is requested from the ssh-agent. This avoids Too many authentication failures for *username* errors when the ssh-agent contains many keys.

      +

      --sftp-use-insecure-cipher

      Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.

      + +

      Translate symlinks to/from regular files with a '.rclonelink' extension

      +

      Don't warn about skipped symlinks. This flag disables warning messages on skipped symlinks or junction points, as you explicitly acknowledge that they should be skipped.

      Changelog

      +

      v1.46 - 2019-02-09

      +

      v1.45 - 2018-11-24

      Contact the rclone project

      Forum

diff --git a/MANUAL.md b/MANUAL.md
index 14b102955..516bd63be 100644
--- a/MANUAL.md
+++ b/MANUAL.md
@@ -1,6 +1,6 @@
 % rclone(1) User Manual
 % Nick Craig-Wood
-% Nov 24, 2018
+% Feb 09, 2019
 
 Rclone
 ======
@@ -9,6 +9,7 @@ Rclone
 Rclone is a command line program to sync files and directories to and from:
 
+* Alibaba Cloud (Aliyun) Object Storage System (OSS)
 * Amazon Drive ([See note](/amazonclouddrive/#status))
 * Amazon S3
 * Backblaze B2
@@ -39,6 +40,7 @@ Rclone is a command line program to sync files and directories to and from:
 * put.io
 * QingStor
 * Rackspace Cloud Files
+* Scaleway
 * SFTP
 * Wasabi
 * WebDAV
@@ -313,6 +315,17 @@
 written a trailing / - meaning "copy the contents of this directory".
 This applies to all commands and whether you are talking about the
 source or destination.
 
+See the [--no-traverse](/docs/#no-traverse) option for controlling
+whether rclone lists the destination directory or not. Supplying this
+option when copying a small number of files into a large destination
+can speed transfers up greatly.
+
+For example, if you have many files in /path/to/src but only a few of
+them change every day, you can copy all the files which have
+changed recently very efficiently like this:
+
+    rclone copy --max-age 24h --no-traverse /path/to/src remote:
+
 **Note**: Use the `-P`/`--progress` flag to view real-time transfer
 statistics
 
@@ -388,6 +401,11 @@
 into `dest:path` then delete the original (if no errors on copy) in
 
 If you want to delete empty source directories after move, use the
 --delete-empty-src-dirs flag.
 
+See the [--no-traverse](/docs/#no-traverse) option for controlling
+whether rclone lists the destination directory or not. Supplying this
+option when moving a small number of files into a large destination
+can speed transfers up greatly.
+
 **Important**: Since this can cause data loss, test first with the
 --dry-run flag.
@@ -1094,6 +1112,15 @@
 you would do:
 
     rclone config create myremote swift env_auth true
 
+Note that if the config process would normally ask a question the
+default is taken. Each time that happens rclone will print a message
+saying how to affect the value taken.
+
+So for example if you wanted to configure a Google Drive remote but
+using remote authorization you would do this:
+
+    rclone config create mydrive drive config_is_local false
+
 ```
 rclone config create [ ]* [flags]
@@ -1255,6 +1282,11 @@
 For example to update the env_auth field of a remote of name myremote
 you would do:
 
     rclone config update myremote swift env_auth true
 
+If the remote uses oauth the token will be updated, if you don't
+require this add an extra parameter thus:
+
+    rclone config update myremote swift env_auth true config_refresh_token false
+
 ```
 rclone config update [ ]+ [flags]
@@ -1641,7 +1673,7 @@ rclone listremotes [flags]
 ```
   -h, --help   help for listremotes
-  -l, --long   Show the type as well as names.
+      --long   Show the type as well as names.
 ```
 
 ## rclone lsf
@@ -1823,7 +1855,13 @@
 If "remote:path" contains the file "subfolder/file.txt", the Path for
 "file.txt" will be "subfolder/file.txt", not
 "remote:path/subfolder/file.txt". When used without --recursive the
 Path will always be the same as Name.
 
-The time is in RFC3339 format with nanosecond precision.
+The time is in RFC3339 format with up to nanosecond precision. The
+number of decimal digits in the seconds will depend on the precision
+that the remote can hold the times, so if times are accurate to the
+nearest millisecond (eg Google Drive) then 3 digits will always be
+shown ("2017-05-31T16:15:57.034+01:00") whereas if the times are
+accurate to the nearest second (Dropbox, Box, WebDav etc) no digits
+will be shown ("2017-05-31T16:15:57+01:00").
 
 The whole output can be processed as a JSON blob, or alternatively it
 can be processed line by line as each item is written one to a line.
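The variable-precision RFC3339 timestamps described in the lsjson hunk above can be illustrated with a small formatter. This is a sketch of the output format only, not rclone's actual Go implementation, and Python datetimes stop at microseconds so nanosecond remotes are out of scope here:

```python
from datetime import datetime, timezone, timedelta

def rfc3339(dt: datetime, digits: int) -> str:
    """Format an aware datetime in RFC3339 with the given number of
    fractional-second digits; 0 drops the fraction entirely."""
    base = dt.strftime("%Y-%m-%dT%H:%M:%S")
    if digits:
        base += "." + f"{dt.microsecond:06d}"[:digits]
    # render the UTC offset as +HH:MM / -HH:MM
    total = int((dt.utcoffset() or timedelta(0)).total_seconds())
    sign = "+" if total >= 0 else "-"
    total = abs(total)
    return base + f"{sign}{total // 3600:02d}:{(total % 3600) // 60:02d}"

tz = timezone(timedelta(hours=1))
t = datetime(2017, 5, 31, 16, 15, 57, 34000, tzinfo=tz)
# millisecond-precision remote (3 digits) vs second-precision remote (0 digits)
assert rfc3339(t, 3) == "2017-05-31T16:15:57.034+01:00"
assert rfc3339(t, 0) == "2017-05-31T16:15:57+01:00"
```

The two assertions reproduce the example strings quoted in the diff above.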
@@ -2075,6 +2113,7 @@ may find that you need one or the other or both. --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off") --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) + --vfs-cache-max-size int Max total size of objects in the cache. (default off) If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -2090,6 +2129,11 @@ closed so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache. +If using --vfs-cache-max-size note that the cache may exceed this size +for two reasons. Firstly because it is only checked every +--vfs-cache-poll-interval. Secondly because open files cannot be +evicted from the cache. + #### --vfs-cache-mode off In this mode the cache will read directly from the remote and write @@ -2154,34 +2198,37 @@ rclone mount remote:path /path/to/mountpoint [flags] ### Options ``` - --allow-non-empty Allow mounting over a non-empty directory. - --allow-other Allow access to other users. - --allow-root Allow access to root user. - --attr-timeout duration Time for which file/directory attributes are cached. (default 1s) - --daemon Run mount as a daemon (background mode). - --daemon-timeout duration Time limit for rclone to respond to kernel (not supported by all OSes). - --debug-fuse Debug the FUSE internals - needs -v. - --default-permissions Makes kernel enforce access control based on the file mode. - --dir-cache-time duration Time to cache directory entries for. (default 5m0s) - --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required. - --gid uint32 Override the gid field set by the filesystem. 
(default 502) - -h, --help help for mount - --max-read-ahead int The number of bytes that can be prefetched for sequential reads. (default 128k) - --no-checksum Don't compare checksums on up/download. - --no-modtime Don't read/write the modification time (can speed things up). - --no-seek Don't allow seeking in files. - -o, --option stringArray Option for libfuse/WinFsp. Repeat if required. - --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s) - --read-only Mount read-only. - --uid uint32 Override the uid field set by the filesystem. (default 502) - --umask int Override the permission bits set by the filesystem. - --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) - --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off") - --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) - --vfs-read-chunk-size int Read the source objects in chunks. (default 128M) - --vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) - --volname string Set the volume name (not supported by all OSes). - --write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used. + --allow-non-empty Allow mounting over a non-empty directory. + --allow-other Allow access to other users. + --allow-root Allow access to root user. + --attr-timeout duration Time for which file/directory attributes are cached. (default 1s) + --daemon Run mount as a daemon (background mode). + --daemon-timeout duration Time limit for rclone to respond to kernel (not supported by all OSes). + --debug-fuse Debug the FUSE internals - needs -v. + --default-permissions Makes kernel enforce access control based on the file mode. 
+ --dir-cache-time duration Time to cache directory entries for. (default 5m0s) + --dir-perms FileMode Directory permissions (default 0777) + --file-perms FileMode File permissions (default 0666) + --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required. + --gid uint32 Override the gid field set by the filesystem. (default 502) + -h, --help help for mount + --max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads. (default 128k) + --no-checksum Don't compare checksums on up/download. + --no-modtime Don't read/write the modification time (can speed things up). + --no-seek Don't allow seeking in files. + -o, --option stringArray Option for libfuse/WinFsp. Repeat if required. + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s) + --read-only Mount read-only. + --uid uint32 Override the uid field set by the filesystem. (default 502) + --umask int Override the permission bits set by the filesystem. + --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off) + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M) + --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) + --volname string Set the volume name (not supported by all OSes). + --write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used. 
``` ## rclone moveto @@ -2390,7 +2437,7 @@ Run rclone listening to remote control commands only. ### Synopsis -This runs rclone so that it only listents to remote control commands. +This runs rclone so that it only listens to remote control commands. This is useful if you are controlling rclone via the rc API. @@ -2463,6 +2510,192 @@ rclone serve [opts] [flags] -h, --help help for serve ``` +## rclone serve dlna + +Serve remote:path over DLNA + +### Synopsis + +rclone serve dlna is a DLNA media server for media stored in a rclone remote. Many +devices, such as the Xbox and PlayStation, can automatically discover this server in the LAN +and play audio/video from it. VLC is also supported. Service discovery uses UDP multicast +packets (SSDP) and will thus only work on LANs. + +Rclone will list all files present in the remote, without filtering based on media formats or +file extensions. Additionally, there is no media transcoding support. This means that some +players might show files that they are not able to play back correctly. + + +### Server options + +Use --addr to specify which IP address and port the server should +listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all +IPs. + + +### Directory Cache + +Using the `--dir-cache-time` flag, you can set how long a +directory should be considered up to date and not refreshed from the +backend. Changes made locally in the mount may appear immediately or +invalidate the cache. However, changes done on the remote will only +be picked up once the cache expires. + +Alternatively, you can send a `SIGHUP` signal to rclone for +it to flush all directory caches, regardless of how old they are. 
+Assuming only one rclone instance is running, you can reset the cache +like this: + + kill -SIGHUP $(pidof rclone) + +If you configure rclone with a [remote control](/rc) then you can use +rclone rc to flush the whole directory cache: + + rclone rc vfs/forget + +Or individual files or directories: + + rclone rc vfs/forget file=path/to/file dir=path/to/dir + +### File Buffering + +The `--buffer-size` flag determines the amount of memory, +that will be used to buffer data in advance. + +Each open file descriptor will try to keep the specified amount of +data in memory at all times. The buffered data is bound to one file +descriptor and won't be shared between multiple open file descriptors +of the same file. + +This flag is a upper limit for the used memory per file descriptor. +The buffer will only use memory for data that is downloaded but not +not yet read. If the buffer is empty, only a small amount of memory +will be used. +The maximum memory used by rclone for buffering can be up to +`--buffer-size * open files`. + +### File Caching + +These flags control the VFS file caching options. The VFS layer is +used by rclone mount to make a cloud storage system work more like a +normal file system. + +You'll need to enable VFS caching if you want, for example, to read +and write simultaneously to a file. See below for more details. + +Note that the VFS cache works in addition to the cache backend and you +may find that you need one or the other or both. + + --cache-dir string Directory rclone will use for caching. + --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) + --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off") + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) + --vfs-cache-max-size int Max total size of objects in the cache. (default off) + +If run with `-vv` rclone will print the location of the file cache. 
The
+files are stored in the user cache file area which is OS dependent but
+can be controlled with `--cache-dir` or setting the appropriate
+environment variable.
+
+The cache has 4 different modes selected by `--vfs-cache-mode`.
+The higher the cache mode the more compatible rclone becomes at the
+cost of using disk space.
+
+Note that files are written back to the remote only when they are
+closed so if rclone is quit or dies with open files then these won't
+get written back to the remote. However they will still be in the on
+disk cache.
+
+If using --vfs-cache-max-size note that the cache may exceed this size
+for two reasons. Firstly because it is only checked every
+--vfs-cache-poll-interval. Secondly because open files cannot be
+evicted from the cache.
+
+#### --vfs-cache-mode off
+
+In this mode the cache will read directly from the remote and write
+directly to the remote without caching anything on disk.
+
+This will mean some operations are not possible
+
+ * Files can't be opened for both read AND write
+ * Files opened for write can't be seeked
+ * Existing files opened for write must have O_TRUNC set
+ * Files open for read with O_TRUNC will be opened write only
+ * Files open for write only will behave as if O_TRUNC was supplied
+ * Open modes O_APPEND, O_TRUNC are ignored
+ * If an upload fails it can't be retried
+
+#### --vfs-cache-mode minimal
+
+This is very similar to "off" except that files opened for read AND
+write will be buffered to disk. This means that files opened for
+write will be a lot more compatible, but uses minimal disk space.
+ +These operations are not possible + + * Files opened for write only can't be seeked + * Existing files opened for write must have O_TRUNC set + * Files opened for write only will ignore O_APPEND, O_TRUNC + * If an upload fails it can't be retried + +#### --vfs-cache-mode writes + +In this mode files opened for read only are still read directly from +the remote, write only and read/write files are buffered to disk +first. + +This mode should support all normal file system operations. + +If an upload fails it will be retried up to --low-level-retries times. + +#### --vfs-cache-mode full + +In this mode all reads and writes are buffered to and from disk. When +a file is opened for read it will be downloaded in its entirety first. + +This may be appropriate for your needs, or you may prefer to look at +the cache backend which does a much more sophisticated job of caching, +including caching directory hierarchies and chunks of files. + +In this mode, unlike the others, when a file is written to the disk, +it will be kept on the disk after it is written to the remote. It +will be purged on a schedule according to `--vfs-cache-max-age`. + +This mode should support all normal file system operations. + +If an upload or download fails it will be retried up to +--low-level-retries times. + + +``` +rclone serve dlna remote:path [flags] +``` + +### Options + +``` + --addr string ip:port or :port to bind the DLNA http server to. (default ":7879") + --dir-cache-time duration Time to cache directory entries for. (default 5m0s) + --dir-perms FileMode Directory permissions (default 0777) + --file-perms FileMode File permissions (default 0666) + --gid uint32 Override the gid field set by the filesystem. (default 502) + -h, --help help for dlna + --no-checksum Don't compare checksums on up/download. + --no-modtime Don't read/write the modification time (can speed things up). + --no-seek Don't allow seeking in files. 
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s) + --read-only Mount read-only. + --uid uint32 Override the uid field set by the filesystem. (default 502) + --umask int Override the permission bits set by the filesystem. (default 2) + --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off) + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M) + --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) +``` + ## rclone serve ftp Serve remote:path over FTP. @@ -2547,6 +2780,7 @@ may find that you need one or the other or both. --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off") --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) + --vfs-cache-max-size int Max total size of objects in the cache. (default off) If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -2562,6 +2796,11 @@ closed so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache. +If using --vfs-cache-max-size note that the cache may exceed this size +for two reasons. Firstly because it is only checked every +--vfs-cache-poll-interval. Secondly because open files cannot be +evicted from the cache. 
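Tying the cache flags above together, a purely illustrative invocation (the remote path and size values here are assumptions, not defaults) might look like:

```
rclone serve ftp remote:path \
    --vfs-cache-mode writes \
    --vfs-cache-max-size 10G \
    --vfs-cache-poll-interval 1m
```

As noted above, the cache can temporarily exceed the configured maximum while files are open or between poll intervals.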
+ #### --vfs-cache-mode off In this mode the cache will read directly from the remote and write @@ -2626,25 +2865,28 @@ rclone serve ftp remote:path [flags] ### Options ``` - --addr string IPaddress:Port or :Port to bind server to. (default "localhost:2121") - --dir-cache-time duration Time to cache directory entries for. (default 5m0s) - --gid uint32 Override the gid field set by the filesystem. (default 502) - -h, --help help for ftp - --no-checksum Don't compare checksums on up/download. - --no-modtime Don't read/write the modification time (can speed things up). - --no-seek Don't allow seeking in files. - --pass string Password for authentication. (empty value allow every password) - --passive-port string Passive port range to use. (default "30000-32000") - --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s) - --read-only Mount read-only. - --uid uint32 Override the uid field set by the filesystem. (default 502) - --umask int Override the permission bits set by the filesystem. (default 2) - --user string User name for authentication. (default "anonymous") - --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) - --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off") - --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) - --vfs-read-chunk-size int Read the source objects in chunks. (default 128M) - --vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) + --addr string IPaddress:Port or :Port to bind server to. (default "localhost:2121") + --dir-cache-time duration Time to cache directory entries for. 
(default 5m0s) + --dir-perms FileMode Directory permissions (default 0777) + --file-perms FileMode File permissions (default 0666) + --gid uint32 Override the gid field set by the filesystem. (default 502) + -h, --help help for ftp + --no-checksum Don't compare checksums on up/download. + --no-modtime Don't read/write the modification time (can speed things up). + --no-seek Don't allow seeking in files. + --pass string Password for authentication. (empty value allow every password) + --passive-port string Passive port range to use. (default "30000-32000") + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s) + --read-only Mount read-only. + --uid uint32 Override the uid field set by the filesystem. (default 502) + --umask int Override the permission bits set by the filesystem. (default 2) + --user string User name for authentication. (default "anonymous") + --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off) + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M) + --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) ``` ## rclone serve http @@ -2772,6 +3014,7 @@ may find that you need one or the other or both. --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off") --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. 
(default 1m0s) + --vfs-cache-max-size int Max total size of objects in the cache. (default off) If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -2787,6 +3030,11 @@ closed so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache. +If using --vfs-cache-max-size note that the cache may exceed this size +for two reasons. Firstly because it is only checked every +--vfs-cache-poll-interval. Secondly because open files cannot be +evicted from the cache. + #### --vfs-cache-mode off In this mode the cache will read directly from the remote and write @@ -2851,32 +3099,35 @@ rclone serve http remote:path [flags] ### Options ``` - --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080") - --cert string SSL PEM key (concatenation of certificate and CA certificate) - --client-ca string Client certificate authority to verify clients with - --dir-cache-time duration Time to cache directory entries for. (default 5m0s) - --gid uint32 Override the gid field set by the filesystem. (default 502) - -h, --help help for http - --htpasswd string htpasswd file - if not provided no authentication is done - --key string SSL PEM Private key - --max-header-bytes int Maximum size of request header (default 4096) - --no-checksum Don't compare checksums on up/download. - --no-modtime Don't read/write the modification time (can speed things up). - --no-seek Don't allow seeking in files. - --pass string Password for authentication. - --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s) - --read-only Mount read-only. 
- --realm string realm for authentication (default "rclone") - --server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --uid uint32 Override the uid field set by the filesystem. (default 502) - --umask int Override the permission bits set by the filesystem. (default 2) - --user string User name for authentication. - --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) - --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off") - --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) - --vfs-read-chunk-size int Read the source objects in chunks. (default 128M) - --vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) + --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080") + --cert string SSL PEM key (concatenation of certificate and CA certificate) + --client-ca string Client certificate authority to verify clients with + --dir-cache-time duration Time to cache directory entries for. (default 5m0s) + --dir-perms FileMode Directory permissions (default 0777) + --file-perms FileMode File permissions (default 0666) + --gid uint32 Override the gid field set by the filesystem. (default 502) + -h, --help help for http + --htpasswd string htpasswd file - if not provided no authentication is done + --key string SSL PEM Private key + --max-header-bytes int Maximum size of request header (default 4096) + --no-checksum Don't compare checksums on up/download. + --no-modtime Don't read/write the modification time (can speed things up). + --no-seek Don't allow seeking in files. + --pass string Password for authentication. + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. 
Only on supported remotes. Set to 0 to disable. (default 1m0s) + --read-only Mount read-only. + --realm string realm for authentication (default "rclone") + --server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --uid uint32 Override the uid field set by the filesystem. (default 502) + --umask int Override the permission bits set by the filesystem. (default 2) + --user string User name for authentication. + --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off) + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M) + --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) ``` ## rclone serve restic @@ -3166,6 +3417,7 @@ may find that you need one or the other or both. --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off") --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) + --vfs-cache-max-size int Max total size of objects in the cache. (default off) If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -3181,6 +3433,11 @@ closed so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache. +If using --vfs-cache-max-size note that the cache may exceed this size +for two reasons. 
Firstly because it is only checked every +--vfs-cache-poll-interval. Secondly because open files cannot be +evicted from the cache. + #### --vfs-cache-mode off In this mode the cache will read directly from the remote and write @@ -3245,33 +3502,36 @@ rclone serve webdav remote:path [flags] ### Options ``` - --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080") - --cert string SSL PEM key (concatenation of certificate and CA certificate) - --client-ca string Client certificate authority to verify clients with - --dir-cache-time duration Time to cache directory entries for. (default 5m0s) - --etag-hash string Which hash to use for the ETag, or auto or blank for off - --gid uint32 Override the gid field set by the filesystem. (default 502) - -h, --help help for webdav - --htpasswd string htpasswd file - if not provided no authentication is done - --key string SSL PEM Private key - --max-header-bytes int Maximum size of request header (default 4096) - --no-checksum Don't compare checksums on up/download. - --no-modtime Don't read/write the modification time (can speed things up). - --no-seek Don't allow seeking in files. - --pass string Password for authentication. - --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s) - --read-only Mount read-only. - --realm string realm for authentication (default "rclone") - --server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --uid uint32 Override the uid field set by the filesystem. (default 502) - --umask int Override the permission bits set by the filesystem. (default 2) - --user string User name for authentication. - --vfs-cache-max-age duration Max age of objects in the cache. 
(default 1h0m0s) - --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off") - --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) - --vfs-read-chunk-size int Read the source objects in chunks. (default 128M) - --vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) + --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080") + --cert string SSL PEM key (concatenation of certificate and CA certificate) + --client-ca string Client certificate authority to verify clients with + --dir-cache-time duration Time to cache directory entries for. (default 5m0s) + --dir-perms FileMode Directory permissions (default 0777) + --etag-hash string Which hash to use for the ETag, or auto or blank for off + --file-perms FileMode File permissions (default 0666) + --gid uint32 Override the gid field set by the filesystem. (default 502) + -h, --help help for webdav + --htpasswd string htpasswd file - if not provided no authentication is done + --key string SSL PEM Private key + --max-header-bytes int Maximum size of request header (default 4096) + --no-checksum Don't compare checksums on up/download. + --no-modtime Don't read/write the modification time (can speed things up). + --no-seek Don't allow seeking in files. + --pass string Password for authentication. + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s) + --read-only Mount read-only. + --realm string realm for authentication (default "rclone") + --server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --uid uint32 Override the uid field set by the filesystem. 
(default 502) + --umask int Override the permission bits set by the filesystem. (default 2) + --user string User name for authentication. + --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off) + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M) + --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) ``` ## rclone settier @@ -3557,6 +3817,15 @@ Options Rclone has a number of options to control its behaviour. +Options that take parameters can have the values passed in two ways, +`--option=value` or `--option value`. However boolean (true/false) +options behave slightly differently to the other options in that +`--boolean` sets the option to `true` and the absence of the flag sets +it to `false`. It is also possible to specify `--boolean=false` or +`--boolean=true`. Note that `--boolean false` is not valid - this is +parsed as `--boolean` and the `false` is parsed as an extra command +line argument for rclone. + Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid @@ -3680,6 +3949,9 @@ See the [mount](/commands/rclone_mount/#file-buffering) documentation for more d Set to 0 to disable the buffering for the minimum memory usage. +Note that the memory allocation of the buffers is influenced by the +[--use-mmap](#use-mmap) flag. + ### --checkers=N ### The number of checkers to run in parallel. 
Checkers do the equality @@ -3718,8 +3990,8 @@ Normally the config file is in your home directory as a file called older version). If `$XDG_CONFIG_HOME` is set it will be at `$XDG_CONFIG_HOME/rclone/rclone.conf` -If you run `rclone -h` and look at the help for the `--config` option -you will see where the default location is for you. +If you run `rclone config file` you will see where the default +location is for you. Use this flag to override the config location, eg `rclone --config=".myconfig" .config`. @@ -4132,8 +4404,8 @@ will fall back to the default behaviour and log an error level message to the console. Note: Encrypted destinations are not supported by `--track-renames`. -Note that `--track-renames` uses extra memory to keep track of all -the rename candidates. +Note that `--track-renames` is incompatible with `--no-traverse` and +that it uses extra memory to keep track of all the rename candidates. Note also that `--track-renames` is incompatible with `--delete-before` and will select `--delete-after` instead of @@ -4228,6 +4500,21 @@ This can be useful when transferring to a remote which doesn't support mod times directly as it is more accurate than a `--size-only` check and faster than using `--checksum`. +### --use-mmap ### + +If this flag is set then rclone will use anonymous memory allocated by +mmap on Unix based platforms and VirtualAlloc on Windows for its +transfer buffers (size controlled by `--buffer-size`). Memory +allocated like this does not go on the Go heap and can be returned to +the OS immediately when it is finished with. + +If this flag is not set then rclone will allocate and free the buffers +using the Go memory allocator which may use more memory as memory +pages are returned less aggressively to the OS. + +It is possible this does not work well on all platforms so it is +disabled by default; in the future it may be enabled by default. 
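Since each transfer buffer is capped at `--buffer-size` regardless of which allocator is used, the worst-case buffer memory can be estimated as the buffer size multiplied by the number of concurrently open files. A minimal sketch with illustrative (not authoritative) values:

```shell
# Worst-case transfer-buffer memory: --buffer-size per open file.
# Illustrative values: 16M buffer size, 4 files open at once.
buffer_mb=16
open_files=4
echo "$(( buffer_mb * open_files ))M"   # prints 64M
```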
+ ### --use-server-modtime ### Some object-store backends (e.g, Swift, S3) do not preserve file modification @@ -4422,6 +4709,24 @@ This option defaults to `false`. **This should be used only for testing.** +### --no-traverse ### + +The `--no-traverse` flag controls whether the destination file system +is traversed when using the `copy` or `move` commands. +`--no-traverse` is not compatible with `sync` and will be ignored if +you supply it with `sync`. + +If you are only copying a small number of files (or are filtering most +of the files) and/or have a large number of files on the destination +then `--no-traverse` will stop rclone listing the destination and save +time. + +However, if you are copying a large number of files, especially if you +are doing a copy where lots of the files under consideration haven't +changed and won't need copying then you shouldn't use `--no-traverse`. + +See [rclone copy](https://rclone.org/commands/rclone_copy/) for an example of how to use it. + Filtering --------- @@ -4646,18 +4951,17 @@ So first configure rclone on your desktop machine to set up the config file. -Find the config file by running `rclone -h` and looking for the help for the `--config` option +Find the config file by running `rclone config file`, for example ``` -$ rclone -h -[snip] - --config="/home/user/.rclone.conf": Config file. -[snip] +$ rclone config file +Configuration file is stored at: +/home/user/.rclone.conf ``` Now transfer it to the remote box (scp, cut paste, ftp, sftp etc) and -place it in the correct place (use `rclone -h` on the remote box to -find out where). +place it in the correct place (use `rclone config file` on the remote +box to find out where). # Filtering, includes and excludes # @@ -5366,7 +5670,7 @@ The slice indices are similar to Python slices: start[:end] start is the 0 based chunk number from the beginning of the file to fetch inclusive. end is 0 based chunk number from the beginning -of the file to fetch exclisive. 
+of the file to fetch exclusive. Both values can be negative, in which case they count from the back of the file. The value "-5:" represents the last 5 chunks of a file. @@ -5617,9 +5921,6 @@ This takes the following parameters - dstFs - a remote name string eg "drive2:" for the destination - dstRemote - a path within that remote eg "file2.txt" for the destination -This returns -- jobid - ID of async job to query with job/status - Authentication is required for this call. ### operations/copyurl: Copy the URL to the object @@ -5697,9 +5998,6 @@ This takes the following parameters - dstFs - a remote name string eg "drive2:" for the destination - dstRemote - a path within that remote eg "file2.txt" for the destination -This returns -- jobid - ID of async job to query with job/status - Authentication is required for this call. ### operations/purge: Remove a directory or container and all of its contents @@ -5777,6 +6075,20 @@ Only supply the options you wish to change. If an option is unknown it will be silently ignored. Not all options will have an effect when changed like this. +For example: + +This sets DEBUG level logs (-vv) + + rclone rc options/set --json '{"main": {"LogLevel": 8}}' + +And this sets INFO level logs (-v) + + rclone rc options/set --json '{"main": {"LogLevel": 7}}' + +And this sets NOTICE level logs (normal without -v) + + rclone rc options/set --json '{"main": {"LogLevel": 6}}' + ### rc/error: This returns an error This returns an error with the input as part of its error string. @@ -5808,8 +6120,6 @@ This takes the following parameters - srcFs - a remote name string eg "drive:src" for the source - dstFs - a remote name string eg "drive:dst" for the destination -This returns -- jobid - ID of async job to query with job/status See the [copy command](https://rclone.org/commands/rclone_copy/) command for more information on the above. 
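For example, assuming a remote-control server is already running (started with `rclone rcd` or the `--rc` flag) and that the remote names below exist — both assumptions for illustration — the call looks like:

```
rclone rc sync/copy srcFs=drive:src dstFs=drive:dst
```

The `srcFs` and `dstFs` key=value arguments correspond directly to the parameters documented above.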
@@ -5823,8 +6133,6 @@ This takes the following parameters - dstFs - a remote name string eg "drive:dst" for the destination - deleteEmptySrcDirs - delete empty src directories if set -This returns -- jobid - ID of async job to query with job/status See the [move command](https://rclone.org/commands/rclone_move/) command for more information on the above. @@ -5837,8 +6145,6 @@ This takes the following parameters - srcFs - a remote name string eg "drive:src" for the source - dstFs - a remote name string eg "drive:dst" for the destination -This returns -- jobid - ID of async job to query with job/status See the [sync command](https://rclone.org/commands/rclone_sync/) command for more information on the above. @@ -6145,7 +6451,7 @@ Here is an overview of the major features of each cloud storage system. | pCloud | MD5, SHA1 | Yes | No | No | W | | QingStor | MD5 | No | No | No | R/W | | SFTP | MD5, SHA1 ‡ | Yes | Depends | No | - | -| WebDAV | - | Yes †† | Depends | No | - | +| WebDAV | MD5, SHA1 ††| Yes ††† | Depends | No | - | | Yandex Disk | MD5 | Yes | No | No | R/W | | The local filesystem | All | Yes | Depends | No | - | @@ -6166,7 +6472,9 @@ This is an SHA256 sum of all the 4MB block SHA256s. ‡ SFTP supports checksums if the same login has shell access and `md5sum` or `sha1sum` as well as `echo` are in the remote's PATH. -†† WebDAV supports modtimes when used with Owncloud and Nextcloud only. +†† WebDAV supports hashes when used with Owncloud and Nextcloud only. + +††† WebDAV supports modtimes when used with Owncloud and Nextcloud only. ‡‡ Microsoft OneDrive Personal supports SHA1 hashes, whereas OneDrive for business and SharePoint server support Microsoft's own @@ -6256,7 +6564,7 @@ operations more efficient. 
| pCloud | Yes | Yes | Yes | Yes | Yes | No | No | No [#2178](https://github.com/ncw/rclone/issues/2178) | Yes | | QingStor | No | Yes | No | No | No | Yes | No | No [#2178](https://github.com/ncw/rclone/issues/2178) | No | | SFTP | No | No | Yes | Yes | No | No | Yes | No [#2178](https://github.com/ncw/rclone/issues/2178) | No | -| WebDAV | Yes | Yes | Yes | Yes | No | No | Yes ‡ | No [#2178](https://github.com/ncw/rclone/issues/2178) | No | +| WebDAV | Yes | Yes | Yes | Yes | No | No | Yes ‡ | No [#2178](https://github.com/ncw/rclone/issues/2178) | Yes | | Yandex Disk | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes | | The local filesystem | Yes | No | Yes | Yes | No | No | Yes | No | Yes | @@ -6327,6 +6635,8 @@ on the particular cloud provider. This is used to fetch quota information from the remote, like bytes used/free/quota and bytes used in the trash. +This is also used to return the space used, available for `rclone mount`. + If the server can't do `About` then `rclone about` will return an error. @@ -6774,6 +7084,7 @@ Amazon S3 Storage Providers The S3 backend can be used with a number of different providers: * AWS S3 +* Alibaba Cloud (Aliyun) Object Storage System (OSS) * Ceph * DigitalOcean Spaces * Dreamhost @@ -6981,6 +7292,8 @@ Choose a number from below, or type in your own value \ "STANDARD_IA" 5 / One Zone Infrequent Access storage class \ "ONEZONE_IA" + 6 / Glacier storage class + \ "GLACIER" storage_class> 1 Remote config -------------------- @@ -7030,8 +7343,33 @@ The modified time is stored as metadata on the object as ### Multipart uploads ### rclone supports multipart uploads with S3 which means that it can -upload files bigger than 5GB. Note that files uploaded *both* with -multipart upload *and* through crypt remotes do not have MD5 sums. +upload files bigger than 5GB. + +Note that files uploaded *both* with multipart upload *and* through +crypt remotes do not have MD5 sums. 
+
+Rclone switches from single part uploads to multipart uploads at the
+point specified by `--s3-upload-cutoff`. This can be a maximum of 5GB
+and a minimum of 0 (ie always upload multipart files).
+
+The chunk sizes used in the multipart upload are specified by
+`--s3-chunk-size` and the number of chunks uploaded concurrently is
+specified by `--s3-upload-concurrency`.
+
+Multipart uploads will use `--transfers` * `--s3-upload-concurrency` *
+`--s3-chunk-size` extra memory. Single part uploads do not use extra
+memory.
+
+Single part transfers can be faster than multipart transfers or slower
+depending on your latency from S3 - the more latency, the more likely
+single part transfers will be faster.
+
+Increasing `--s3-upload-concurrency` will increase throughput (8 would
+be a sensible value) and increasing `--s3-chunk-size` also increases
+throughput (16M would be sensible). Increasing either of these will
+use more memory. The default values are high enough to gain most of
+the possible performance without using too much memory.
+
 ### Buckets and Regions ###
@@ -7125,9 +7463,9 @@ A proper fix is being worked on in [issue #1824](https://github.com/ncw/rclone/i
 ### Glacier ###
-You can transition objects to glacier storage using a [lifecycle policy](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html).
+You can upload objects using the glacier storage class or transition them to glacier using a [lifecycle policy](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html).
 The bucket can still be synced or copied into normally, but if rclone
-tries to access the data you will see an error like below.
+tries to access data from the glacier storage class you will see an error like below.
     2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file
@@ -7137,7 +7475,7 @@ the object(s) in question before using rclone.
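As a sanity check on the multipart memory estimate above (`--transfers` * `--s3-upload-concurrency` * `--s3-chunk-size`), here is the arithmetic with illustrative values rather than rclone's defaults:

```shell
# Extra memory used by multipart uploads (illustrative values:
# 4 transfers, 4-way concurrency, 16M chunks).
transfers=4
concurrency=4
chunk_mb=16
echo "$(( transfers * concurrency * chunk_mb ))M"   # prints 256M
```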
### Standard Options -Here are the standard options specific to s3 (Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)). +Here are the standard options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)). #### --s3-provider @@ -7150,6 +7488,8 @@ Choose your S3 provider. - Examples: - "AWS" - Amazon Web Services (AWS) S3 + - "Alibaba" + - Alibaba Cloud Object Storage System (OSS) formerly Aliyun - "Ceph" - Ceph Object Storage - "DigitalOcean" @@ -7160,6 +7500,8 @@ Choose your S3 provider. - IBM COS S3 - "Minio" - Minio Object Storage + - "Netease" + - Netease Object Storage (NOS) - "Wasabi" - Wasabi Object Storage - "Other" @@ -7231,6 +7573,9 @@ Region to connect to. - "eu-west-2" - EU (London) Region - Needs location constraint eu-west-2. + - "eu-north-1" + - EU (Stockholm) Region + - Needs location constraint eu-north-1. - "eu-central-1" - EU (Frankfurt) Region - Needs location constraint eu-central-1. @@ -7329,9 +7674,9 @@ Specify if using an IBM COS On Premise. - "s3.ams-eu-geo.objectstorage.service.networklayer.com" - EU Cross Region Amsterdam Private Endpoint - "s3.eu-gb.objectstorage.softlayer.net" - - Great Britan Endpoint + - Great Britain Endpoint - "s3.eu-gb.objectstorage.service.networklayer.com" - - Great Britan Private Endpoint + - Great Britain Private Endpoint - "s3.ap-geo.objectstorage.softlayer.net" - APAC Cross Regional Endpoint - "s3.tok-ap-geo.objectstorage.softlayer.net" @@ -7359,6 +7704,54 @@ Specify if using an IBM COS On Premise. #### --s3-endpoint +Endpoint for OSS API. 
+ +- Config: endpoint +- Env Var: RCLONE_S3_ENDPOINT +- Type: string +- Default: "" +- Examples: + - "oss-cn-hangzhou.aliyuncs.com" + - East China 1 (Hangzhou) + - "oss-cn-shanghai.aliyuncs.com" + - East China 2 (Shanghai) + - "oss-cn-qingdao.aliyuncs.com" + - North China 1 (Qingdao) + - "oss-cn-beijing.aliyuncs.com" + - North China 2 (Beijing) + - "oss-cn-zhangjiakou.aliyuncs.com" + - North China 3 (Zhangjiakou) + - "oss-cn-huhehaote.aliyuncs.com" + - North China 5 (Huhehaote) + - "oss-cn-shenzhen.aliyuncs.com" + - South China 1 (Shenzhen) + - "oss-cn-hongkong.aliyuncs.com" + - Hong Kong (Hong Kong) + - "oss-us-west-1.aliyuncs.com" + - US West 1 (Silicon Valley) + - "oss-us-east-1.aliyuncs.com" + - US East 1 (Virginia) + - "oss-ap-southeast-1.aliyuncs.com" + - Southeast Asia Southeast 1 (Singapore) + - "oss-ap-southeast-2.aliyuncs.com" + - Asia Pacific Southeast 2 (Sydney) + - "oss-ap-southeast-3.aliyuncs.com" + - Southeast Asia Southeast 3 (Kuala Lumpur) + - "oss-ap-southeast-5.aliyuncs.com" + - Asia Pacific Southeast 5 (Jakarta) + - "oss-ap-northeast-1.aliyuncs.com" + - Asia Pacific Northeast 1 (Japan) + - "oss-ap-south-1.aliyuncs.com" + - Asia Pacific South 1 (Mumbai) + - "oss-eu-central-1.aliyuncs.com" + - Central Europe 1 (Frankfurt) + - "oss-eu-west-1.aliyuncs.com" + - West Europe (London) + - "oss-me-east-1.aliyuncs.com" + - Middle East 1 (Dubai) + +#### --s3-endpoint + Endpoint for S3 API. Required when using an S3 clone. @@ -7404,6 +7797,8 @@ Used when creating buckets only. - EU (Ireland) Region. - "eu-west-2" - EU (London) Region. + - "eu-north-1" + - EU (Stockholm) Region. - "EU" - EU Region. 
- "ap-southeast-1" @@ -7446,7 +7841,7 @@ For on-prem COS, do not make a selection from this list, hit enter - "us-east-flex" - US East Region Flex - "us-south-standard" - - US Sout hRegion Standard + - US South Region Standard - "us-south-vault" - US South Region Vault - "us-south-cold" @@ -7462,13 +7857,13 @@ For on-prem COS, do not make a selection from this list, hit enter - "eu-flex" - EU Cross Region Flex - "eu-gb-standard" - - Great Britan Standard + - Great Britain Standard - "eu-gb-vault" - - Great Britan Vault + - Great Britain Vault - "eu-gb-cold" - - Great Britan Cold + - Great Britain Cold - "eu-gb-flex" - - Great Britan Flex + - Great Britain Flex - "ap-standard" - APAC Standard - "ap-vault" @@ -7508,6 +7903,8 @@ Leave blank if not sure. Used when creating buckets only. Canned ACL used when creating buckets and storing or copying objects. +This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too. + For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl Note that this ACL is applied when server side copying objects as S3 @@ -7591,17 +7988,73 @@ The storage class to use when storing new objects in S3. - Standard Infrequent Access storage class - "ONEZONE_IA" - One Zone Infrequent Access storage class + - "GLACIER" + - Glacier storage class + +#### --s3-storage-class + +The storage class to use when storing new objects in OSS. + +- Config: storage_class +- Env Var: RCLONE_S3_STORAGE_CLASS +- Type: string +- Default: "" +- Examples: + - "" + - Default + - "STANDARD" + - Standard storage class + - "GLACIER" + - Archive storage mode. + - "STANDARD_IA" + - Infrequent access storage mode. ### Advanced Options -Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)). +Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)). 
+
+#### --s3-bucket-acl
+
+Canned ACL used when creating buckets.
+
+For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
+
+Note that this ACL is applied only when creating buckets. If it
+isn't set then "acl" is used instead.
+
+- Config: bucket_acl
+- Env Var: RCLONE_S3_BUCKET_ACL
+- Type: string
+- Default: ""
+- Examples:
+    - "private"
+        - Owner gets FULL_CONTROL. No one else has access rights (default).
+    - "public-read"
+        - Owner gets FULL_CONTROL. The AllUsers group gets READ access.
+    - "public-read-write"
+        - Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
+        - Granting this on a bucket is generally not recommended.
+    - "authenticated-read"
+        - Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
+
+#### --s3-upload-cutoff
+
+Cutoff for switching to chunked upload
+
+Any files larger than this will be uploaded in chunks of chunk_size.
+The minimum is 0 and the maximum is 5GB.
+
+- Config: upload_cutoff
+- Env Var: RCLONE_S3_UPLOAD_CUTOFF
+- Type: SizeSuffix
+- Default: 200M

#### --s3-chunk-size

Chunk size to use for uploading.

-Any files larger than this will be uploaded in chunks of this
-size. The default is 5MB. The minimum is 5MB.
+When uploading files larger than upload_cutoff they will be uploaded
+as multipart uploads using this chunk size.

Note that "--s3-upload-concurrency" chunks of this size are buffered
in memory per transfer.
@@ -7646,7 +8099,7 @@ this may help to speed up the transfers.
- Config: upload_concurrency
- Env Var: RCLONE_S3_UPLOAD_CONCURRENCY
- Type: int
-- Default: 2
+- Default: 4

#### --s3-force-path-style
@@ -8067,6 +8520,28 @@ So once set up, for example to copy files into a bucket

rclone copy /path/to/files minio:bucket
```

+### Scaleway {#scaleway}
+
+[Scaleway](https://www.scaleway.com/object-storage/) Object Storage is a platform that allows you to store anything from backups, logs and web assets to documents and photos.
+Files can be dropped from the Scaleway console or transferred through the Scaleway API and CLI or using any S3-compatible tool.
+
+Scaleway provides an S3 interface which can be configured for use with rclone like this:
+
+```
+[scaleway]
+type = s3
+env_auth = false
+endpoint = s3.nl-ams.scw.cloud
+access_key_id = SCWXXXXXXXXXXXXXX
+secret_access_key = 1111111-2222-3333-44444-55555555555555
+region = nl-ams
+location_constraint =
+acl = private
+force_path_style = false
+server_side_encryption =
+storage_class =
+```
+
### Wasabi ###

[Wasabi](https://wasabi.com) is a cloud-based object storage service for a
@@ -8181,30 +8656,41 @@ server_side_encryption =
storage_class =
```

-### Aliyun OSS / Netease NOS ###
+### Alibaba OSS {#alibaba-oss}

-This describes how to set up Aliyun OSS - Netease NOS is the same
-except for different endpoints.
+Here is an example of making an [Alibaba Cloud (Aliyun) OSS](https://www.alibabacloud.com/product/oss/)
+configuration. First run:

-Note this is a pretty standard S3 setup, except for the setting of
-`force_path_style = false` in the advanced config.
+    rclone config
+
+This will guide you through an interactive setup process.

```
-# rclone config
-e/n/d/r/c/s/q> n
+No remotes found - make a new one
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
name> oss
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
- 3 / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)
+[snip]
+ 4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)
   \ "s3"
+[snip]
Storage> s3
Choose your S3 provider.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value - 8 / Any other S3 compatible provider - \ "Other" -provider> other + 1 / Amazon Web Services (AWS) S3 + \ "AWS" + 2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun + \ "Alibaba" + 3 / Ceph Object Storage + \ "Ceph" +[snip] +provider> Alibaba Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. Enter a boolean value (true or false). Press Enter for the default ("false"). @@ -8217,67 +8703,62 @@ env_auth> 1 AWS Access Key ID. Leave blank for anonymous access or runtime credentials. Enter a string value. Press Enter for the default (""). -access_key_id> xxxxxxxxxxxx +access_key_id> accesskeyid AWS Secret Access Key (password) Leave blank for anonymous access or runtime credentials. Enter a string value. Press Enter for the default (""). -secret_access_key> xxxxxxxxxxxxxxxxx -Region to connect to. -Leave blank if you are using an S3 clone and you don't have a region. +secret_access_key> secretaccesskey +Endpoint for OSS API. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value - 1 / Use this if unsure. Will use v4 signatures and an empty region. - \ "" - 2 / Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH. - \ "other-v2-signature" -region> 1 -Endpoint for S3 API. -Required when using an S3 clone. -Enter a string value. Press Enter for the default (""). -Choose a number from below, or type in your own value -endpoint> oss-cn-shenzhen.aliyuncs.com -Location constraint - must be set to match the Region. -Leave blank if not sure. Used when creating buckets only. -Enter a string value. Press Enter for the default (""). -location_constraint> -Canned ACL used when creating buckets and/or storing objects in S3. 
-For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl + 1 / East China 1 (Hangzhou) + \ "oss-cn-hangzhou.aliyuncs.com" + 2 / East China 2 (Shanghai) + \ "oss-cn-shanghai.aliyuncs.com" + 3 / North China 1 (Qingdao) + \ "oss-cn-qingdao.aliyuncs.com" +[snip] +endpoint> 1 +Canned ACL used when creating buckets and storing or copying objects. + +Note that this ACL is applied when server side copying objects as S3 +doesn't copy the ACL from the source but rather writes a fresh one. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value 1 / Owner gets FULL_CONTROL. No one else has access rights (default). \ "private" + 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. + \ "public-read" + / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. +[snip] acl> 1 +The storage class to use when storing new objects in OSS. +Enter a string value. Press Enter for the default (""). +Choose a number from below, or type in your own value + 1 / Default + \ "" + 2 / Standard storage class + \ "STANDARD" + 3 / Archive storage mode. + \ "GLACIER" + 4 / Infrequent access storage mode. + \ "STANDARD_IA" +storage_class> 1 Edit advanced config? (y/n) y) Yes n) No -y/n> y -Chunk size to use for uploading -Enter a size with suffix k,M,G,T. Press Enter for the default ("5M"). -chunk_size> -Don't store MD5 checksum with object metadata -Enter a boolean value (true or false). Press Enter for the default ("false"). -disable_checksum> -An AWS session token -Enter a string value. Press Enter for the default (""). -session_token> -Concurrency for multipart uploads. -Enter a signed integer. Press Enter for the default ("2"). -upload_concurrency> -If true use path style access if false use virtual hosted style. -Some providers (eg Aliyun OSS or Netease COS) require this. -Enter a boolean value (true or false). Press Enter for the default ("true"). 
-force_path_style> false +y/n> n Remote config -------------------- [oss] type = s3 -provider = Other +provider = Alibaba env_auth = false -access_key_id = xxxxxxxxx -secret_access_key = xxxxxxxxxxxxx -endpoint = oss-cn-shenzhen.aliyuncs.com +access_key_id = accesskeyid +secret_access_key = secretaccesskey +endpoint = oss-cn-hangzhou.aliyuncs.com acl = private -force_path_style = false +storage_class = Standard -------------------- y) Yes this is OK e) Edit this remote @@ -8285,6 +8766,12 @@ d) Delete this remote y/e/d> y ``` +### Netease NOS ### + +For Netease NOS configure as per the configurator `rclone config` +setting the provider `Netease`. This will automatically set +`force_path_style = false` which is necessary for it to run properly. + Backblaze B2 ---------------------------------------- @@ -8297,9 +8784,11 @@ Here is an example of making a b2 configuration. First run rclone config -This will guide you through an interactive setup process. You will -need your account number (a short hex number) and key (a long hex -number) which you can get from the b2 control panel. +This will guide you through an interactive setup process. To authenticate +you will either need your Account ID (a short hex number) and Master +Application Key (a long hex number) OR an Application Key, which is the +recommended method. See below for further details on generating and using +an Application Key. ``` No remotes found - make a new one @@ -8379,13 +8868,14 @@ excess files in the bucket. B2 supports multiple [Application Keys for different access permission to B2 Buckets](https://www.backblaze.com/b2/docs/application_keys.html). -You can use these with rclone too. +You can use these with rclone too; you will need to use rclone version 1.43 +or later. 
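As a sketch, a remote configured with an application key rather than the master key looks like the following; both values here are placeholders, not real credentials:

```
[b2]
type = b2
# account holds the applicationKeyId, not the master Account ID
account = 0011aabbccddeff0000000001
key = K001abcdefghijklmnopqrstuvwxyz0123456
```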
Follow Backblaze's docs to create an Application Key with the required
-permission and add the `Application Key ID` as the `account` and the
+permission and add the `applicationKeyId` as the `account` and the
`Application Key` itself as the `key`.

-Note that you must put the Application Key ID as the `account` - you
+Note that you must put the _applicationKeyId_ as the `account` – you
can't use the master Account ID. If you try then B2 will return 401
errors.
@@ -8462,8 +8952,8 @@ versions of files, leaving the current ones intact. You can also
supply a path and only old versions under that path will be deleted, eg
`rclone cleanup remote:bucket/path/to/stuff`.

-Note that `cleanup` does not remove partially uploaded files
-from the bucket.
+Note that `cleanup` will remove partially uploaded files from the bucket
+if they are more than a day old.

When you `purge` a bucket, the current and the old versions will be
deleted then the bucket will be deleted.
@@ -8671,13 +9161,22 @@ Upload chunk size. Must fit in memory.

When uploading large files, chunk the file into this size. Note that
these chunks are buffered in memory and there might be a maximum of
"--transfers" chunks in progress at once. 5,000,000 Bytes is the
-minimim size.
+minimum size.

- Config: chunk_size
- Env Var: RCLONE_B2_CHUNK_SIZE
- Type: SizeSuffix
- Default: 96M

+#### --b2-disable-checksum
+
+Disable checksums for large (> upload cutoff) files
+
+- Config: disable_checksum
+- Env Var: RCLONE_B2_DISABLE_CHECKSUM
+- Type: bool
+- Default: false
+

Box
@@ -8788,6 +9287,17 @@ To copy a local directory to an Box directory called backup

    rclone copy /home/source remote:backup

+### Using rclone with an Enterprise account with SSO ###
+
+If you have an "Enterprise" account type with Box with single sign on
+(SSO), you need to create a password to use Box with rclone. This can
+be done at your Enterprise Box account by going to Settings, "Account"
+Tab, and then setting the password in the "Authentication" field.
+
+Once you have done this, you can setup your Enterprise Box account
+using the same procedure detailed above, with the password you
+have just set.
+
### Invalid refresh token ###

According to the [box docs](https://developer.box.com/v2.0/docs/oauth-20#section-6-using-the-access-and-refresh-tokens):
@@ -10418,6 +10928,9 @@ Note that `--bind` isn't supported.

FTP could support server side move but doesn't yet.

+Note that the ftp backend does not support the `ftp_proxy` environment
+variable yet.
+
Google Cloud Storage
-------------------------------------------------
@@ -10761,16 +11274,26 @@ Location for the newly created buckets.
        - Multi-regional location for United States.
    - "asia-east1"
        - Taiwan.
+    - "asia-east2"
+        - Hong Kong.
    - "asia-northeast1"
        - Tokyo.
+    - "asia-south1"
+        - Mumbai.
    - "asia-southeast1"
        - Singapore.
    - "australia-southeast1"
        - Sydney.
+    - "europe-north1"
+        - Finland.
    - "europe-west1"
        - Belgium.
    - "europe-west2"
        - London.
+    - "europe-west3"
+        - Frankfurt.
+    - "europe-west4"
+        - Netherlands.
    - "us-central1"
        - Iowa.
    - "us-east1"
@@ -10779,6 +11302,8 @@ Location for the newly created buckets.
        - Northern Virginia.
    - "us-west1"
        - Oregon.
+    - "us-west2"
+        - California.

#### --gcs-storage-class
@@ -11587,6 +12112,24 @@ If Object's are greater, use drive v2 API to download.

- Type: SizeSuffix
- Default: off

+#### --drive-pacer-min-sleep
+
+Minimum time to sleep between API calls.
+
+- Config: pacer_min_sleep
+- Env Var: RCLONE_DRIVE_PACER_MIN_SLEEP
+- Type: Duration
+- Default: 100ms
+
+#### --drive-pacer-burst
+
+Number of API calls to allow without sleeping.
+
+- Config: pacer_burst
+- Env Var: RCLONE_DRIVE_PACER_BURST
+- Type: int
+- Default: 100
+

### Limitations ###
@@ -11645,9 +12188,7 @@ second that each client_id can do set by Google. rclone already has a
high quota and I will continue to make sure it is high enough by
contacting Google.
-However you might find you get better performance making your own
-client_id if you are a heavy user. Or you may not depending on exactly
-how Google have been raising rclone's rate limit.
+It is strongly recommended to use your own client ID as the default rclone ID is heavily used. If you have multiple services running, it is recommended to use an API key for each service. The default Google quota is 10 transactions per second, so it is best to stay under that number; exceeding it will cause rclone to rate limit and slow things down.

Here is how to create your own Google Drive client ID for rclone:
@@ -11812,6 +12353,8 @@ URL of http host to connect to
- Examples:
    - "https://example.com"
        - Connect to example.com
+    - "https://user:pass@example.com"
+        - Connect to example.com using a username and password
@@ -11980,6 +12523,24 @@ default for this is 5GB which is its maximum value.

- Type: SizeSuffix
- Default: 5G

+#### --hubic-no-chunk
+
+Don't chunk files during streaming upload.
+
+When doing streaming uploads (eg using rcat or mount) setting this
+flag will cause the swift backend to not upload chunked files.
+
+This will limit the maximum upload size to 5GB. However non chunked
+files are easier to deal with and have an MD5SUM.
+
+Rclone will still chunk files bigger than chunk_size when doing normal
+copy operations.
+
+- Config: no_chunk
+- Env Var: RCLONE_HUBIC_NO_CHUNK
+- Type: bool
+- Default: false
+

### Limitations ###
@@ -12119,22 +12680,13 @@ Here are the standard options specific to jottacloud (JottaCloud).

#### --jottacloud-user

-User Name
+User Name:

- Config: user
- Env Var: RCLONE_JOTTACLOUD_USER
- Type: string
- Default: ""

-#### --jottacloud-pass
-
-Password.
-
-- Config: pass
-- Env Var: RCLONE_JOTTACLOUD_PASS
-- Type: string
-- Default: ""
-
#### --jottacloud-mountpoint

The mountpoint to use.
@@ -12181,6 +12733,15 @@ Default is false, meaning link command will create or retrieve public link.
- Type: bool
- Default: false

+#### --jottacloud-upload-resume-limit
+
+Files bigger than this can be resumed if the upload fails.
+
+- Config: upload_resume_limit
+- Env Var: RCLONE_JOTTACLOUD_UPLOAD_RESUME_LIMIT
+- Type: SizeSuffix
+- Default: 10M
+

### Limitations ###
@@ -12868,13 +13429,17 @@ platforms they are common. Rclone will map these names to and from an
identical looking unicode equivalent. For example if a file has a `?`
in it, it will be mapped to `？` instead.

-The largest allowed file size is 10GiB (10,737,418,240 bytes).
+The largest allowed file sizes are 15GB for OneDrive for Business and 35GB for OneDrive Personal (Updated 4 Jan 2019).
+
+The entire path, including the file name, must contain fewer than 400 characters for OneDrive, OneDrive for Business and SharePoint Online. If you are encrypting file and folder names with rclone, you may want to pay attention to this limitation because the encrypted names are typically longer than the original ones.

OneDrive seems to be OK with at least 50,000 files in a folder, but at
100,000 rclone will get errors listing the directory like
`couldn’t list files: UnknownError:`. See
[#2707](https://github.com/ncw/rclone/issues/2707) for more info.

+An official document about the limitations for different types of OneDrive can be found [here](https://support.office.com/en-us/article/invalid-file-names-and-file-types-in-onedrive-onedrive-for-business-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa).
+
### Versioning issue ###

Every change in OneDrive causes the service to create a new version.
@@ -12886,6 +13451,16 @@ The `copy` is the only rclone command affected by this as we copy the
file and then afterwards set the modification time to match the source
file.

+**Note**: Starting October 2018, users will no longer be able to disable versioning by default.
This is because Microsoft has brought an [update](https://techcommunity.microsoft.com/t5/Microsoft-OneDrive-Blog/New-Updates-to-OneDrive-and-SharePoint-Team-Site-Versioning/ba-p/204390) to the mechanism. To change this new default setting, a PowerShell command is required to be run by a SharePoint admin. If you are an admin, you can run these commands in PowerShell to change that setting: + +1. `Install-Module -Name Microsoft.Online.SharePoint.PowerShell` (in case you haven't installed this already) +1. `Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking` +1. `Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU@YOURSITE.COM` (replacing `YOURSITE`, `YOU`, `YOURSITE.COM` with the actual values; this will prompt for your credentials) +1. `Set-SPOTenant -EnableMinimumVersionRequirement $False` +1. `Disconnect-SPOService` (to disconnect from the server) + +*Below are the steps for normal users to disable versioning. If you don't see the "No Versioning" option, make sure the above requirements are met.* + User [Weropol](https://github.com/Weropol) has found a method to disable versioning on OneDrive @@ -13267,6 +13842,55 @@ Number of connection retries. - Type: int - Default: 3 +#### --qingstor-upload-cutoff + +Cutoff for switching to chunked upload + +Any files larger than this will be uploaded in chunks of chunk_size. +The minimum is 0 and the maximum is 5GB. + +- Config: upload_cutoff +- Env Var: RCLONE_QINGSTOR_UPLOAD_CUTOFF +- Type: SizeSuffix +- Default: 200M + +#### --qingstor-chunk-size + +Chunk size to use for uploading. + +When uploading files larger than upload_cutoff they will be uploaded +as multipart uploads using this chunk size. + +Note that "--qingstor-upload-concurrency" chunks of this size are buffered +in memory per transfer. + +If you are transferring large files over high speed links and you have +enough memory, then increasing this will speed up the transfers. 
+
+- Config: chunk_size
+- Env Var: RCLONE_QINGSTOR_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 4M
+
+#### --qingstor-upload-concurrency
+
+Concurrency for multipart uploads.
+
+This is the number of chunks of the same file that are uploaded
+concurrently.
+
+NB if you set this to > 1 then the checksums of multipart uploads
+become corrupted (the uploads themselves are not corrupted though).
+
+If you are uploading small numbers of large files over high speed links
+and these uploads do not fully utilize your bandwidth, then increasing
+this may help to speed up the transfers.
+
+- Config: upload_concurrency
+- Env Var: RCLONE_QINGSTOR_UPLOAD_CONCURRENCY
+- Type: int
+- Default: 1
+

Swift
@@ -13657,6 +14281,33 @@ Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)

- Type: string
- Default: ""

+#### --swift-application-credential-id
+
+Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
+
+- Config: application_credential_id
+- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_ID
+- Type: string
+- Default: ""
+
+#### --swift-application-credential-name
+
+Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
+
+- Config: application_credential_name
+- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_NAME
+- Type: string
+- Default: ""
+
+#### --swift-application-credential-secret
+
+Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
+
+- Config: application_credential_secret
+- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_SECRET
+- Type: string
+- Default: ""
+
#### --swift-auth-version

AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
@@ -13719,6 +14370,24 @@ default for this is 5GB which is its maximum value.

- Type: SizeSuffix
- Default: 5G

+#### --swift-no-chunk
+
+Don't chunk files during streaming upload.
+
+When doing streaming uploads (eg using rcat or mount) setting this
+flag will cause the swift backend to not upload chunked files.
+
+This will limit the maximum upload size to 5GB.
However non chunked
+files are easier to deal with and have an MD5SUM.
+
+Rclone will still chunk files bigger than chunk_size when doing normal
+copy operations.
+
+- Config: no_chunk
+- Env Var: RCLONE_SWIFT_NO_CHUNK
+- Type: bool
+- Default: false
+

### Modified time ###
@@ -14032,11 +14701,15 @@ The SFTP remote supports three authentication methods:

* Key file
* ssh-agent

-Key files should be unencrypted PEM-encoded private key files. For
-instance `/home/$USER/.ssh/id_rsa`.
+Key files should be PEM-encoded private key files. For instance `/home/$USER/.ssh/id_rsa`.
+Only unencrypted OpenSSH or PEM encrypted files are supported.

-If you don't specify `pass` or `key_file` then rclone will attempt to
-contact an ssh-agent.
+If you don't specify `pass` or `key_file` then rclone will attempt to contact an ssh-agent.
+
+You can also specify `key_use_agent` to force the usage of an ssh-agent. In this case
+`key_file` can also be specified to force the usage of a specific key in the ssh-agent.
+
+Using an ssh-agent is the only way to load encrypted OpenSSH keys at the moment.

If you set the `--sftp-ask-password` option, rclone will prompt for a
password when needed and no password has been configured.
@@ -14112,13 +14785,38 @@ SSH password, leave blank to use ssh-agent.

#### --sftp-key-file

-Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.

- Config: key_file
- Env Var: RCLONE_SFTP_KEY_FILE
- Type: string
- Default: ""

+#### --sftp-key-file-pass
+
+The passphrase to decrypt the PEM-encoded private key file.
+
+Only PEM encrypted key files (old OpenSSH format) are supported. Encrypted keys
+in the new OpenSSH format can't be used.
+
+- Config: key_file_pass
+- Env Var: RCLONE_SFTP_KEY_FILE_PASS
+- Type: string
+- Default: ""
+
+#### --sftp-key-use-agent
+
+When set, forces the usage of the ssh-agent.
+
+When key-file is also set, the ".pub" file of the specified key-file is read and only the associated key is
+requested from the ssh-agent. This avoids `Too many authentication failures for *username*` errors
+when the ssh-agent contains many keys.
+
+- Config: key_use_agent
+- Env Var: RCLONE_SFTP_KEY_USE_AGENT
+- Type: bool
+- Default: false
+
#### --sftp-use-insecure-cipher

Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
@@ -14469,7 +15167,11 @@ To copy a local directory to an WebDAV directory called backup

Plain WebDAV does not support modified times. However when used with
Owncloud or Nextcloud rclone will support modified times.

-Hashes are not supported.
+Likewise plain WebDAV does not support hashes, however when used with
+Owncloud or Nextcloud rclone will support SHA1 and MD5 hashes.
+Depending on the exact version of Owncloud or Nextcloud hashes may
+appear on all objects, or only on objects which had a hash uploaded
+with them.

### Standard Options
@@ -14623,7 +15325,7 @@ pass = encryptedpassword

### dCache ###

-dCache is a storage system with WebDAV doors that support, beside basic and x509,
+[dCache](https://www.dcache.org/) is a storage system with WebDAV doors that support, besides basic and x509,
authentication with [Macaroons](https://www.dcache.org/manuals/workshop-2017-05-29-Umea/000-Final/anupam_macaroons_v02.pdf) (bearer tokens).

Configure as normal using the `other` type. Don't enter a username or
@@ -14641,7 +15343,7 @@ pass =
bearer_token = your-macaroon
```

-There is a [script](https://github.com/onnozweers/dcache-scripts/blob/master/get-share-link) that
+There is a [script](https://github.com/sara-nl/GridScripts/blob/master/get-macaroon) that
obtains a Macaroon from a dCache WebDAV endpoint, and creates an rclone
config file.

Yandex Disk
@@ -14767,6 +15469,19 @@ does not take any path arguments.
To view your current quota you can use the `rclone about remote:`
command which will display your usage limit (quota) and the current
usage.

+### Limitations ###
+
+When uploading very large files (bigger than about 5GB) you will need
+to increase the `--timeout` parameter. This is because Yandex pauses
+(perhaps to calculate the MD5SUM for the entire file) before returning
+confirmation that the file has been uploaded. The default handling of
+timeouts in rclone is to assume a 5 minute pause is an error and close
+the connection - you'll see `net/http: timeout awaiting response
+headers` errors in the logs if this is happening. Setting the timeout
+to twice the max size of file in GB should be enough, so if you want
+to upload a 30GB file set a timeout of `2 * 30 = 60m`, that is
+`--timeout 60m`.
+

### Standard Options
@@ -14885,7 +15600,8 @@ Normally rclone will ignore symlinks or junction points (which behave
like symlinks under Windows).

If you supply `--copy-links` or `-L` then rclone will follow the
-symlink and copy the pointed to file or directory.
+symlink and copy the pointed to file or directory. Note that this
+flag is incompatible with `--links` / `-l`.

This flag applies to all commands.
@@ -14920,6 +15636,75 @@ $ rclone -L ls /tmp/a
        6 b/one
```

+#### --links, -l
+
+Normally rclone will ignore symlinks or junction points (which behave
+like symlinks under Windows).
+
+If you supply this flag then rclone will copy symbolic links from the local storage,
+and store them as text files, with a '.rclonelink' suffix in the remote storage.
+
+The text file will contain the target of the symbolic link (see example).
+
+This flag applies to all commands.
+
+For example, supposing you have a directory structure like this
+
+```
+$ tree /tmp/a
+/tmp/a
+├── file1 -> ./file4
+└── file2 -> /home/user/file3
+```
+
+Copying the entire directory with '-l'
+
+```
+$ rclone copyto -l /tmp/a/ remote:/tmp/a/
+```
+
+The remote files are created with a '.rclonelink' suffix
+
+```
+$ rclone ls remote:/tmp/a
+       5 file1.rclonelink
+      14 file2.rclonelink
+```
+
+The remote files will contain the target of the symbolic links
+
+```
+$ rclone cat remote:/tmp/a/file1.rclonelink
+./file4
+
+$ rclone cat remote:/tmp/a/file2.rclonelink
+/home/user/file3
+```
+
+Copying them back with '-l'
+
+```
+$ rclone copyto -l remote:/tmp/a/ /tmp/b/
+
+$ tree /tmp/b
+/tmp/b
+├── file1 -> ./file4
+└── file2 -> /home/user/file3
+```
+
+However, if copied back without '-l'
+
+```
+$ rclone copyto remote:/tmp/a/ /tmp/b/
+
+$ tree /tmp/b
+/tmp/b
+├── file1.rclonelink
+└── file2.rclonelink
+```
+
+Note that this flag is incompatible with `--copy-links` / `-L`.
+
### Restricting filesystems with --one-file-system

Normally rclone will recurse through filesystems as mounted.
@@ -14993,6 +15778,15 @@ Follow symlinks and copy the pointed to item.

- Type: bool
- Default: false

+#### --links
+
+Translate symlinks to/from regular files with a '.rclonelink' extension
+
+- Config: links
+- Env Var: RCLONE_LOCAL_LINKS
+- Type: bool
+- Default: false
+
#### --skip-links

Don't warn about skipped symlinks.
@@ -15047,6 +15841,135 @@ Don't cross filesystem boundaries (unix/macOS only).
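As an illustration of the `.rclonelink` translation described above, here is what `--links` does conceptually, sketched with plain coreutils rather than rclone itself; the paths are throwaway temporaries:

```shell
# Sketch only: store a symlink's target as a text file, the way
# --links represents it on the remote, then read it back.
demo=$(mktemp -d)
ln -s ./file4 "$demo/file1"                        # a relative symlink
readlink "$demo/file1" > "$demo/file1.rclonelink"  # save the target as text
cat "$demo/file1.rclonelink"                       # prints the target: ./file4
```

The text-file representation is why a round trip without `-l` leaves literal `.rclonelink` files behind: nothing on the remote marks them as links except the suffix.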
# Changelog +## v1.46 - 2019-02-09 + +* New backends + * Support Alibaba Cloud (Aliyun) OSS via the s3 backend (Nick Craig-Wood) +* New commands + * serve dlna: serves a remote via DLNA for the local network (nicolov) +* New Features + * copy, move: Restore deprecated `--no-traverse` flag (Nick Craig-Wood) + * This is useful when transferring a small number of files into a large destination + * genautocomplete: Add remote path completion for bash completion (Christopher Peterson & Danil Semelenov) + * Buffer memory handling reworked to return memory to the OS better (Nick Craig-Wood) + * Buffer recycling library to replace sync.Pool + * Optionally use memory mapped memory for better memory shrinking + * Enable with `--use-mmap` if having memory problems - not default yet + * Parallelise reading of files specified by `--files-from` (Nick Craig-Wood) + * check: Add stats showing total files matched. (Dario Guzik) + * Allow rename/delete open files under Windows (Nick Craig-Wood) + * lsjson: Use exactly the correct number of decimal places in the seconds (Nick Craig-Wood) + * Add cookie support with cmdline switch `--use-cookies` for all HTTP based remotes (qip) + * Warn if `--checksum` is set but there are no hashes available (Nick Craig-Wood) + * Rework rate limiting (pacer) to be more accurate and allow bursting (Nick Craig-Wood) + * Improve error reporting for too many/few arguments in commands (Nick Craig-Wood) + * listremotes: Remove `-l` short flag as it conflicts with the new global flag (weetmuts) + * Make http serving with auth generate INFO messages on auth fail (Nick Craig-Wood) +* Bug Fixes + * Fix layout of stats (Nick Craig-Wood) + * Fix `--progress` crash under Windows Jenkins (Nick Craig-Wood) + * Fix transfer of google/onedrive docs by calling Rcat in Copy when size is -1 (Cnly) + * copyurl: Fix checking of `--dry-run` (Denis Skovpen) +* Mount + * Check that mountpoint and local directory to mount don't overlap (Nick Craig-Wood) + * Fix mount
size under 32 bit Windows (Nick Craig-Wood) +* VFS + * Implement renaming of directories for backends without DirMove (Nick Craig-Wood) + * now all backends except b2 support renaming directories + * Implement `--vfs-cache-max-size` to limit the total size of the cache (Nick Craig-Wood) + * Add `--dir-perms` and `--file-perms` flags to set default permissions (Nick Craig-Wood) + * Fix deadlock on concurrent operations on a directory (Nick Craig-Wood) + * Fix deadlock between RWFileHandle.close and File.Remove (Nick Craig-Wood) + * Fix renaming/deleting open files with cache mode "writes" under Windows (Nick Craig-Wood) + * Fix panic on rename with `--dry-run` set (Nick Craig-Wood) + * Fix vfs/refresh with recurse=true needing the `--fast-list` flag +* Local + * Add support for `-l`/`--links` (symbolic link translation) (yair@unicorn) + * this works by showing links as `link.rclonelink` - see local backend docs for more info + * this errors if used with `-L`/`--copy-links` + * Fix renaming/deleting open files on Windows (Nick Craig-Wood) +* Crypt + * Check for maximum length before decrypting filename to fix panic (Garry McNulty) +* Azure Blob + * Allow building azureblob backend on *BSD (themylogin) + * Use the rclone HTTP client to support `--dump headers`, `--tpslimit` etc (Nick Craig-Wood) + * Use the s3 pacer for 0 delay in non error conditions (Nick Craig-Wood) + * Ignore directory markers (Nick Craig-Wood) + * Stop Mkdir attempting to create existing containers (Nick Craig-Wood) +* B2 + * cleanup: will remove unfinished large files >24hrs old (Garry McNulty) + * For a bucket limited application key check the bucket name (Nick Craig-Wood) + * before this, rclone would use the authorised bucket regardless of what you put on the command line + * Added `--b2-disable-checksum` flag (Wojciech Smigielski) + * this enables large files to be uploaded without a SHA-1 hash for speed reasons +* Drive + * Set default pacer to 100ms for 10 tps (Nick Craig-Wood) + * This 
fits the Google defaults much better and reduces the 403 errors massively + * Add `--drive-pacer-min-sleep` and `--drive-pacer-burst` to control the pacer + * Improve ChangeNotify support for items with multiple parents (Fabian Möller) + * Fix ListR for items with multiple parents - this fixes oddities with `vfs/refresh` (Fabian Möller) + * Fix using `--drive-impersonate` and appfolders (Nick Craig-Wood) + * Fix google docs in rclone mount for some (not all) applications (Nick Craig-Wood) +* Dropbox + * Retry-After support for Dropbox backend (Mathieu Carbou) +* FTP + * Wait for 60 seconds for a connection to Close then declare it dead (Nick Craig-Wood) + * helps with indefinite hangs on some FTP servers +* Google Cloud Storage + * Update google cloud storage endpoints (weetmuts) +* HTTP + * Add an example with username and password which is supported but wasn't documented (Nick Craig-Wood) + * Fix backend with `--files-from` and non-existent files (Nick Craig-Wood) +* Hubic + * Make error message more informative if authentication fails (Nick Craig-Wood) +* Jottacloud + * Resume and deduplication support (Oliver Heyme) + * Use token auth for all API requests; don't store password anymore (Sebastian Bünger) + * Add support for 2-factor authentication (Sebastian Bünger) +* Mega + * Implement v2 account login which fixes logins for newer Mega accounts (Nick Craig-Wood) + * Return error if an unknown length file is attempted to be uploaded (Nick Craig-Wood) + * Add new error codes for better error reporting (Nick Craig-Wood) +* Onedrive + * Fix broken support for "shared with me" folders (Alex Chen) + * Fix root ID not normalised (Cnly) + * Return err instead of panic on unknown-sized uploads (Cnly) +* Qingstor + * Fix go routine leak on multipart upload errors (Nick Craig-Wood) + * Add upload chunk size/concurrency/cutoff control (Nick Craig-Wood) + * Default `--qingstor-upload-concurrency` to 1 to work around bug (Nick Craig-Wood) +* S3 + * Implement
`--s3-upload-cutoff` for single part uploads below this (Nick Craig-Wood) + * Change `--s3-upload-concurrency` default to 4 to increase performance (Nick Craig-Wood) + * Add `--s3-bucket-acl` to control bucket ACL (Nick Craig-Wood) + * Auto detect region for buckets on operation failure (Nick Craig-Wood) + * Add GLACIER storage class (William Cocker) + * Add Scaleway to s3 documentation (Rémy Léone) + * Add AWS endpoint eu-north-1 (weetmuts) +* SFTP + * Add support for PEM encrypted private keys (Fabian Möller) + * Add option to force the usage of an ssh-agent (Fabian Möller) + * Perform environment variable expansion on key-file (Fabian Möller) + * Fix rmdir on Windows based servers (eg CrushFTP) (Nick Craig-Wood) + * Fix rmdir deleting directory contents on some SFTP servers (Nick Craig-Wood) + * Fix error on dangling symlinks (Nick Craig-Wood) +* Swift + * Add `--swift-no-chunk` to disable segmented uploads in rcat/mount (Nick Craig-Wood) + * Introduce application credential auth support (kayrus) + * Fix memory usage by slimming Object (Nick Craig-Wood) + * Fix extra requests on upload (Nick Craig-Wood) + * Fix reauth on big files (Nick Craig-Wood) +* Union + * Fix poll-interval not working (Nick Craig-Wood) +* WebDAV + * Support About which means rclone mount will show the correct disk size (Nick Craig-Wood) + * Support MD5 and SHA1 hashes with Owncloud and Nextcloud (Nick Craig-Wood) + * Fail soft on time parsing errors (Nick Craig-Wood) + * Fix infinite loop on failed directory creation (Nick Craig-Wood) + * Fix identification of directories for Bitrix Site Manager (Nick Craig-Wood) + * Fix upload of 0 length files on some servers (Nick Craig-Wood) + * If MKCOL fails with 423 Locked assume the directory exists (Nick Craig-Wood) + ## v1.45 - 2018-11-24 * New backends @@ -16645,8 +17568,8 @@ work on all the remote storage systems. ### Can I copy the config from one machine to another ### Sure! Rclone stores all of its config in a single file.
If you want -to find this file, the simplest way is to run `rclone -h` and look at -the help for the `--config` flag which will tell you where it is. +to find this file, run `rclone config file` which will tell you where +it is. See the [remote setup docs](https://rclone.org/remote_setup/) for more info. @@ -16727,8 +17650,6 @@ In general the variables are called `http_proxy` (for services reached over `http`) and `https_proxy` (for services reached over `https`). Most public services will be using `https`, but you may wish to set both. -If you ever use `FTP` then you would need to set `ftp_proxy`. - The content of the variable is `protocol://server:port`. The protocol value is the one used to talk to the proxy server, itself, and is commonly either `http` or `socks5`. @@ -16752,6 +17673,8 @@ e.g. export no_proxy=localhost,127.0.0.0/8,my.host.name export NO_PROXY=$no_proxy +Note that the ftp backend does not support `ftp_proxy` yet. + ### Rclone gives x509: failed to load system roots and no roots provided error ### This means that `rclone` can't find the SSL root certificates. Likely @@ -16997,7 +17920,7 @@ Contributors * Michael P. Dubner * Antoine GIRARD * Mateusz Piotrowski - * Animosity022 + * Animosity022 * Peter Baumgartner * Craig Rachel * Michael G.
Noll @@ -17060,6 +17983,25 @@ Contributors * Peter Kaminski * Henry Ptasinski * Alexander + * Garry McNulty + * Mathieu Carbou + * Mark Otway + * William Cocker <37018962+WilliamCocker@users.noreply.github.com> + * François Leurent <131.js@cloudyks.org> + * Arkadius Stefanski + * Jay + * andrea rota + * nicolov + * Dario Guzik + * qip + * yair@unicorn + * Matt Robinson + * kayrus + * Rémy Léone + * Wojciech Smigielski + * weetmuts + * Jonathan + * James Carpenter # Contact the rclone project # diff --git a/MANUAL.txt b/MANUAL.txt index 4e1545f59..2730f8c03 100644 --- a/MANUAL.txt +++ b/MANUAL.txt @@ -1,6 +1,6 @@ rclone(1) User Manual Nick Craig-Wood -Nov 24, 2018 +Feb 09, 2019 @@ -12,6 +12,7 @@ RCLONE Rclone is a command line program to sync files and directories to and from: +- Alibaba Cloud (Aliyun) Object Storage System (OSS) - Amazon Drive (See note) - Amazon S3 - Backblaze B2 @@ -42,6 +43,7 @@ from: - put.io - QingStor - Rackspace Cloud Files +- Scaleway - SFTP - Wasabi - WebDAV @@ -318,6 +320,16 @@ written a trailing / - meaning "copy the contents of this directory". This applies to all commands and whether you are talking about the source or destination. +See the --no-traverse option for controlling whether rclone lists the +destination directory or not. Supplying this option when copying a small +number of files into a large destination can speed transfers up greatly. + +For example, if you have many files in /path/to/src but only a few of +them change every day, you can copy all the files which have changed +recently very efficiently like this: + + rclone copy --max-age 24h --no-traverse /path/to/src remote: + NOTE: Use the -P/--progress flag to view real-time transfer statistics rclone copy source:path dest:path [flags] @@ -383,6 +395,10 @@ then delete the original (if no errors on copy) in source:path. If you want to delete empty source directories after move, use the --delete-empty-src-dirs flag.
+See the --no-traverse option for controlling whether rclone lists the +destination directory or not. Supplying this option when moving a small +number of files into a large destination can speed transfers up greatly. + IMPORTANT: Since this can cause data loss, test first with the --dry-run flag. @@ -1008,6 +1024,15 @@ you would do: rclone config create myremote swift env_auth true +Note that if the config process would normally ask a question the +default is taken. Each time that happens rclone will print a message +saying how to affect the value taken. + +So for example if you wanted to configure a Google Drive remote but +using remote authorization you would do this: + + rclone config create mydrive drive config_is_local false + rclone config create [ ]* [flags] Options @@ -1141,6 +1166,11 @@ you would do: rclone config update myremote swift env_auth true +If the remote uses oauth the token will be updated; if you don't require +this add an extra parameter thus: + + rclone config update myremote swift env_auth true config_refresh_token false + rclone config update [ ]+ [flags] Options @@ -1459,7 +1489,7 @@ When used with the -l flag it lists the types too. Options -h, --help help for listremotes - -l, --long Show the type as well as names. + --long Show the type as well as names. rclone lsf @@ -1627,7 +1657,13 @@ Path for "file.txt" will be "subfolder/file.txt", not "remote:path/subfolder/file.txt". When used without --recursive the Path will always be the same as Name. -The time is in RFC3339 format with nanosecond precision. +The time is in RFC3339 format with up to nanosecond precision.
The +number of decimal digits in the seconds will depend on the precision +that the remote can hold the times, so if times are accurate to the +nearest millisecond (eg Google Drive) then 3 digits will always be shown +("2017-05-31T16:15:57.034+01:00") whereas if the times are accurate to +the nearest second (Dropbox, Box, WebDav etc) no digits will be shown +("2017-05-31T16:15:57+01:00"). The whole output can be processed as a JSON blob, or alternatively it can be processed line by line as each item is written one to a line. @@ -1870,6 +1906,7 @@ may find that you need one or the other or both. --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off") --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) + --vfs-cache-max-size int Max total size of objects in the cache. (default off) If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -1885,6 +1922,11 @@ so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache. +If using --vfs-cache-max-size note that the cache may exceed this size +for two reasons. Firstly because it is only checked every +--vfs-cache-poll-interval. Secondly because open files cannot be evicted +from the cache. + --vfs-cache-mode off In this mode the cache will read directly from the remote and write @@ -1944,34 +1986,37 @@ If an upload or download fails it will be retried up to Options - --allow-non-empty Allow mounting over a non-empty directory. - --allow-other Allow access to other users. - --allow-root Allow access to root user. - --attr-timeout duration Time for which file/directory attributes are cached. (default 1s) - --daemon Run mount as a daemon (background mode). 
- --daemon-timeout duration Time limit for rclone to respond to kernel (not supported by all OSes). - --debug-fuse Debug the FUSE internals - needs -v. - --default-permissions Makes kernel enforce access control based on the file mode. - --dir-cache-time duration Time to cache directory entries for. (default 5m0s) - --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required. - --gid uint32 Override the gid field set by the filesystem. (default 502) - -h, --help help for mount - --max-read-ahead int The number of bytes that can be prefetched for sequential reads. (default 128k) - --no-checksum Don't compare checksums on up/download. - --no-modtime Don't read/write the modification time (can speed things up). - --no-seek Don't allow seeking in files. - -o, --option stringArray Option for libfuse/WinFsp. Repeat if required. - --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s) - --read-only Mount read-only. - --uid uint32 Override the uid field set by the filesystem. (default 502) - --umask int Override the permission bits set by the filesystem. - --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) - --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off") - --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) - --vfs-read-chunk-size int Read the source objects in chunks. (default 128M) - --vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) - --volname string Set the volume name (not supported by all OSes). - --write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used. + --allow-non-empty Allow mounting over a non-empty directory. 
+ --allow-other Allow access to other users. + --allow-root Allow access to root user. + --attr-timeout duration Time for which file/directory attributes are cached. (default 1s) + --daemon Run mount as a daemon (background mode). + --daemon-timeout duration Time limit for rclone to respond to kernel (not supported by all OSes). + --debug-fuse Debug the FUSE internals - needs -v. + --default-permissions Makes kernel enforce access control based on the file mode. + --dir-cache-time duration Time to cache directory entries for. (default 5m0s) + --dir-perms FileMode Directory permissions (default 0777) + --file-perms FileMode File permissions (default 0666) + --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required. + --gid uint32 Override the gid field set by the filesystem. (default 502) + -h, --help help for mount + --max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads. (default 128k) + --no-checksum Don't compare checksums on up/download. + --no-modtime Don't read/write the modification time (can speed things up). + --no-seek Don't allow seeking in files. + -o, --option stringArray Option for libfuse/WinFsp. Repeat if required. + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s) + --read-only Mount read-only. + --uid uint32 Override the uid field set by the filesystem. (default 502) + --umask int Override the permission bits set by the filesystem. + --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off) + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. 
(default 128M) + --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) + --volname string Set the volume name (not supported by all OSes). + --write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used. rclone moveto @@ -2152,7 +2197,7 @@ Run rclone listening to remote control commands only. Synopsis -This runs rclone so that it only listents to remote control commands. +This runs rclone so that it only listens to remote control commands. This is useful if you are controlling rclone via the rc API. @@ -2213,6 +2258,184 @@ Options -h, --help help for serve +rclone serve dlna + +Serve remote:path over DLNA + +Synopsis + +rclone serve dlna is a DLNA media server for media stored in a rclone +remote. Many devices, such as the Xbox and PlayStation, can +automatically discover this server in the LAN and play audio/video from +it. VLC is also supported. Service discovery uses UDP multicast packets +(SSDP) and will thus only work on LANs. + +Rclone will list all files present in the remote, without filtering +based on media formats or file extensions. Additionally, there is no +media transcoding support. This means that some players might show files +that they are not able to play back correctly. + +Server options + +Use --addr to specify which IP address and port the server should listen +on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. + +Directory Cache + +Using the --dir-cache-time flag, you can set how long a directory should +be considered up to date and not refreshed from the backend. Changes +made locally in the mount may appear immediately or invalidate the +cache. However, changes done on the remote will only be picked up once +the cache expires. 
+ +Alternatively, you can send a SIGHUP signal to rclone for it to flush +all directory caches, regardless of how old they are. Assuming only one +rclone instance is running, you can reset the cache like this: + + kill -SIGHUP $(pidof rclone) + +If you configure rclone with a remote control then you can use rclone rc +to flush the whole directory cache: + + rclone rc vfs/forget + +Or individual files or directories: + + rclone rc vfs/forget file=path/to/file dir=path/to/dir + +File Buffering + +The --buffer-size flag determines the amount of memory that will be +used to buffer data in advance. + +Each open file descriptor will try to keep the specified amount of data +in memory at all times. The buffered data is bound to one file +descriptor and won't be shared between multiple open file descriptors of +the same file. + +This flag is an upper limit for the used memory per file descriptor. The +buffer will only use memory for data that is downloaded but not yet +read. If the buffer is empty, only a small amount of memory will be +used. The maximum memory used by rclone for buffering can be up to +--buffer-size * open files. + +File Caching + +These flags control the VFS file caching options. The VFS layer is used +by rclone mount to make a cloud storage system work more like a normal +file system. + +You'll need to enable VFS caching if you want, for example, to read and +write simultaneously to a file. See below for more details. + +Note that the VFS cache works in addition to the cache backend and you +may find that you need one or the other or both. + + --cache-dir string Directory rclone will use for caching. + --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) + --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off") + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) + --vfs-cache-max-size int Max total size of objects in the cache.
(default off) + +If run with -vv rclone will print the location of the file cache. The +files are stored in the user cache file area which is OS dependent but +can be controlled with --cache-dir or setting the appropriate +environment variable. + +The cache has 4 different modes selected by --vfs-cache-mode. The higher +the cache mode the more compatible rclone becomes at the cost of using +disk space. + +Note that files are written back to the remote only when they are closed +so if rclone is quit or dies with open files then these won't get +written back to the remote. However they will still be in the on disk +cache. + +If using --vfs-cache-max-size note that the cache may exceed this size +for two reasons. Firstly because it is only checked every +--vfs-cache-poll-interval. Secondly because open files cannot be evicted +from the cache. + +--vfs-cache-mode off + +In this mode the cache will read directly from the remote and write +directly to the remote without caching anything on disk. + +This will mean some operations are not possible + +- Files can't be opened for both read AND write +- Files opened for write can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files open for read with O_TRUNC will be opened write only +- Files open for write only will behave as if O_TRUNC was supplied +- Open modes O_APPEND, O_TRUNC are ignored +- If an upload fails it can't be retried + +--vfs-cache-mode minimal + +This is very similar to "off" except that files opened for read AND +write will be buffered to disk. This means that files opened for write +will be a lot more compatible, but uses the minimal disk space.
+ +These operations are not possible + +- Files opened for write only can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files opened for write only will ignore O_APPEND, O_TRUNC +- If an upload fails it can't be retried + +--vfs-cache-mode writes + +In this mode files opened for read only are still read directly from the +remote, write only and read/write files are buffered to disk first. + +This mode should support all normal file system operations. + +If an upload fails it will be retried up to --low-level-retries times. + +--vfs-cache-mode full + +In this mode all reads and writes are buffered to and from disk. When a +file is opened for read it will be downloaded in its entirety first. + +This may be appropriate for your needs, or you may prefer to look at the +cache backend which does a much more sophisticated job of caching, +including caching directory hierarchies and chunks of files. + +In this mode, unlike the others, when a file is written to the disk, it +will be kept on the disk after it is written to the remote. It will be +purged on a schedule according to --vfs-cache-max-age. + +This mode should support all normal file system operations. + +If an upload or download fails it will be retried up to +--low-level-retries times. + + rclone serve dlna remote:path [flags] + +Options + + --addr string ip:port or :port to bind the DLNA http server to. (default ":7879") + --dir-cache-time duration Time to cache directory entries for. (default 5m0s) + --dir-perms FileMode Directory permissions (default 0777) + --file-perms FileMode File permissions (default 0666) + --gid uint32 Override the gid field set by the filesystem. (default 502) + -h, --help help for dlna + --no-checksum Don't compare checksums on up/download. + --no-modtime Don't read/write the modification time (can speed things up). + --no-seek Don't allow seeking in files. + --poll-interval duration Time to wait between polling for changes. 
Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s) + --read-only Mount read-only. + --uid uint32 Override the uid field set by the filesystem. (default 502) + --umask int Override the permission bits set by the filesystem. (default 2) + --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off) + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M) + --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) + + rclone serve ftp Serve remote:path over FTP. @@ -2295,6 +2518,7 @@ may find that you need one or the other or both. --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off") --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) + --vfs-cache-max-size int Max total size of objects in the cache. (default off) If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -2310,6 +2534,11 @@ so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache. +If using --vfs-cache-max-size note that the cache may exceed this size +for two reasons. Firstly because it is only checked every +--vfs-cache-poll-interval. Secondly because open files cannot be evicted +from the cache. 
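The two caveats above - eviction only happens when the poll fires, and open files are never evicted - can be sketched as a toy eviction pass. This is illustrative shell only, not rclone's actual code; the file names and sizes are invented for the demo:

```shell
# Toy eviction pass: delete closed cache files oldest-first until the
# total fits under MAX_SIZE bytes. Open files are skipped, which is
# why the cache can legitimately sit above the configured limit.
set -e
cache=$(mktemp -d)
MAX_SIZE=8
printf 'aaaa' > "$cache/old.bin"    # oldest, closed
printf 'bbbb' > "$cache/open.bin"   # pretend this one is still open
printf 'cccc' > "$cache/new.bin"    # newest, closed
total() { cat "$cache"/* | wc -c; }
for f in old.bin open.bin new.bin; do   # oldest-first order
  [ "$(total)" -le "$MAX_SIZE" ] && break
  [ "$f" = "open.bin" ] && continue     # never evict open files
  rm "$cache/$f"
done
echo "cache size now: $(total)"
```

If every file were open, the loop would delete nothing and the total would stay above MAX_SIZE until the next poll - the same behaviour the note describes for --vfs-cache-max-size.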
+ --vfs-cache-mode off In this mode the cache will read directly from the remote and write @@ -2369,25 +2598,28 @@ If an upload or download fails it will be retried up to Options - --addr string IPaddress:Port or :Port to bind server to. (default "localhost:2121") - --dir-cache-time duration Time to cache directory entries for. (default 5m0s) - --gid uint32 Override the gid field set by the filesystem. (default 502) - -h, --help help for ftp - --no-checksum Don't compare checksums on up/download. - --no-modtime Don't read/write the modification time (can speed things up). - --no-seek Don't allow seeking in files. - --pass string Password for authentication. (empty value allow every password) - --passive-port string Passive port range to use. (default "30000-32000") - --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s) - --read-only Mount read-only. - --uid uint32 Override the uid field set by the filesystem. (default 502) - --umask int Override the permission bits set by the filesystem. (default 2) - --user string User name for authentication. (default "anonymous") - --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) - --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off") - --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) - --vfs-read-chunk-size int Read the source objects in chunks. (default 128M) - --vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) + --addr string IPaddress:Port or :Port to bind server to. (default "localhost:2121") + --dir-cache-time duration Time to cache directory entries for. 
(default 5m0s) + --dir-perms FileMode Directory permissions (default 0777) + --file-perms FileMode File permissions (default 0666) + --gid uint32 Override the gid field set by the filesystem. (default 502) + -h, --help help for ftp + --no-checksum Don't compare checksums on up/download. + --no-modtime Don't read/write the modification time (can speed things up). + --no-seek Don't allow seeking in files. + --pass string Password for authentication. (empty value allow every password) + --passive-port string Passive port range to use. (default "30000-32000") + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s) + --read-only Mount read-only. + --uid uint32 Override the uid field set by the filesystem. (default 502) + --umask int Override the permission bits set by the filesystem. (default 2) + --user string User name for authentication. (default "anonymous") + --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off) + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M) + --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) rclone serve http @@ -2513,6 +2745,7 @@ may find that you need one or the other or both. --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off") --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. 
(default 1m0s) + --vfs-cache-max-size int Max total size of objects in the cache. (default off) If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -2528,6 +2761,11 @@ so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache. +If using --vfs-cache-max-size note that the cache may exceed this size +for two reasons. Firstly because it is only checked every +--vfs-cache-poll-interval. Secondly because open files cannot be evicted +from the cache. + --vfs-cache-mode off In this mode the cache will read directly from the remote and write @@ -2587,32 +2825,35 @@ If an upload or download fails it will be retried up to Options - --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080") - --cert string SSL PEM key (concatenation of certificate and CA certificate) - --client-ca string Client certificate authority to verify clients with - --dir-cache-time duration Time to cache directory entries for. (default 5m0s) - --gid uint32 Override the gid field set by the filesystem. (default 502) - -h, --help help for http - --htpasswd string htpasswd file - if not provided no authentication is done - --key string SSL PEM Private key - --max-header-bytes int Maximum size of request header (default 4096) - --no-checksum Don't compare checksums on up/download. - --no-modtime Don't read/write the modification time (can speed things up). - --no-seek Don't allow seeking in files. - --pass string Password for authentication. - --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s) - --read-only Mount read-only. 
- --realm string realm for authentication (default "rclone") - --server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --uid uint32 Override the uid field set by the filesystem. (default 502) - --umask int Override the permission bits set by the filesystem. (default 2) - --user string User name for authentication. - --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) - --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off") - --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) - --vfs-read-chunk-size int Read the source objects in chunks. (default 128M) - --vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) + --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080") + --cert string SSL PEM key (concatenation of certificate and CA certificate) + --client-ca string Client certificate authority to verify clients with + --dir-cache-time duration Time to cache directory entries for. (default 5m0s) + --dir-perms FileMode Directory permissions (default 0777) + --file-perms FileMode File permissions (default 0666) + --gid uint32 Override the gid field set by the filesystem. (default 502) + -h, --help help for http + --htpasswd string htpasswd file - if not provided no authentication is done + --key string SSL PEM Private key + --max-header-bytes int Maximum size of request header (default 4096) + --no-checksum Don't compare checksums on up/download. + --no-modtime Don't read/write the modification time (can speed things up). + --no-seek Don't allow seeking in files. + --pass string Password for authentication. + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. 
Only on supported remotes. Set to 0 to disable. (default 1m0s) + --read-only Mount read-only. + --realm string realm for authentication (default "rclone") + --server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --uid uint32 Override the uid field set by the filesystem. (default 502) + --umask int Override the permission bits set by the filesystem. (default 2) + --user string User name for authentication. + --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off) + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M) + --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) rclone serve restic @@ -2889,6 +3130,7 @@ may find that you need one or the other or both. --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off") --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) + --vfs-cache-max-size int Max total size of objects in the cache. (default off) If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -2904,6 +3146,11 @@ so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache. +If using --vfs-cache-max-size note that the cache may exceed this size +for two reasons. 
Firstly because it is only checked every +--vfs-cache-poll-interval. Secondly because open files cannot be evicted +from the cache. + --vfs-cache-mode off In this mode the cache will read directly from the remote and write @@ -2963,33 +3210,36 @@ If an upload or download fails it will be retried up to Options - --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080") - --cert string SSL PEM key (concatenation of certificate and CA certificate) - --client-ca string Client certificate authority to verify clients with - --dir-cache-time duration Time to cache directory entries for. (default 5m0s) - --etag-hash string Which hash to use for the ETag, or auto or blank for off - --gid uint32 Override the gid field set by the filesystem. (default 502) - -h, --help help for webdav - --htpasswd string htpasswd file - if not provided no authentication is done - --key string SSL PEM Private key - --max-header-bytes int Maximum size of request header (default 4096) - --no-checksum Don't compare checksums on up/download. - --no-modtime Don't read/write the modification time (can speed things up). - --no-seek Don't allow seeking in files. - --pass string Password for authentication. - --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s) - --read-only Mount read-only. - --realm string realm for authentication (default "rclone") - --server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --uid uint32 Override the uid field set by the filesystem. (default 502) - --umask int Override the permission bits set by the filesystem. (default 2) - --user string User name for authentication. - --vfs-cache-max-age duration Max age of objects in the cache. 
(default 1h0m0s) - --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off") - --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) - --vfs-read-chunk-size int Read the source objects in chunks. (default 128M) - --vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) + --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080") + --cert string SSL PEM key (concatenation of certificate and CA certificate) + --client-ca string Client certificate authority to verify clients with + --dir-cache-time duration Time to cache directory entries for. (default 5m0s) + --dir-perms FileMode Directory permissions (default 0777) + --etag-hash string Which hash to use for the ETag, or auto or blank for off + --file-perms FileMode File permissions (default 0666) + --gid uint32 Override the gid field set by the filesystem. (default 502) + -h, --help help for webdav + --htpasswd string htpasswd file - if not provided no authentication is done + --key string SSL PEM Private key + --max-header-bytes int Maximum size of request header (default 4096) + --no-checksum Don't compare checksums on up/download. + --no-modtime Don't read/write the modification time (can speed things up). + --no-seek Don't allow seeking in files. + --pass string Password for authentication. + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s) + --read-only Mount read-only. + --realm string realm for authentication (default "rclone") + --server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --uid uint32 Override the uid field set by the filesystem. 
(default 502) + --umask int Override the permission bits set by the filesystem. (default 2) + --user string User name for authentication. + --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off) + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M) + --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) rclone settier @@ -3260,6 +3510,14 @@ Options Rclone has a number of options to control its behaviour. +Options that take parameters can have the values passed in two ways, +--option=value or --option value. However boolean (true/false) options +behave slightly differently to the other options in that --boolean sets +the option to true and the absence of the flag sets it to false. It is +also possible to specify --boolean=false or --boolean=true. Note that +--boolean false is not valid - this is parsed as --boolean and the false +is parsed as an extra command line argument for rclone. + Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units @@ -3382,6 +3640,9 @@ memory for buffering. See the mount documentation for more details. Set to 0 to disable the buffering for the minimum memory usage. +Note that the memory allocation of the buffers is influenced by the +--use-mmap flag. + --checkers=N The number of checkers to run in parallel. Checkers do the equality @@ -3419,8 +3680,8 @@ Normally the config file is in your home directory as a file called version). 
If $XDG_CONFIG_HOME is set it will be at $XDG_CONFIG_HOME/rclone/rclone.conf -If you run rclone -h and look at the help for the --config option you -will see where the default location is for you. +If you run rclone config file you will see where the default location is +for you. Use this flag to override the config location, eg rclone --config=".myconfig" .config. @@ -3827,8 +4088,8 @@ will fall back to the default behaviour and log an error level message to the console. Note: Encrypted destinations are not supported by --track-renames. -Note that --track-renames uses extra memory to keep track of all the -rename candidates. +Note that --track-renames is incompatible with --no-traverse and that it +uses extra memory to keep track of all the rename candidates. Note also that --track-renames is incompatible with --delete-before and will select --delete-after instead of --delete-during. @@ -3922,6 +4183,21 @@ This can be useful when transferring to a remote which doesn't support mod times directly as it is more accurate than a --size-only check and faster than using --checksum. +--use-mmap + +If this flag is set then rclone will use anonymous memory allocated by +mmap on Unix based platforms and VirtualAlloc on Windows for its +transfer buffers (size controlled by --buffer-size). Memory allocated +like this does not go on the Go heap and can be returned to the OS +immediately when it is finished with. + +If this flag is not set then rclone will allocate and free the buffers +using the Go memory allocator which may use more memory as memory pages +are returned less aggressively to the OS. + +It is possible this does not work well on all platforms so it is +disabled by default; in the future it may be enabled by default. + --use-server-modtime Some object-store backends (e.g, Swift, S3) do not preserve file @@ -4106,6 +4382,23 @@ This option defaults to false. THIS SHOULD BE USED ONLY FOR TESTING. 
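The --use-mmap behaviour described above — transfer buffers taken as anonymous memory straight from the OS rather than from the language heap, so pages can be handed back the moment the buffer is freed — can be illustrated with Python's mmap module. This is purely an analogy (rclone does this in Go via mmap/VirtualAlloc); the buffer size here is just an example value.

```python
import mmap

# Anonymous mapping: length -1 fileno means the memory is requested
# directly from the OS and is not file-backed, analogous to rclone's
# --use-mmap transfer buffers (illustrative sketch only).
BUFFER_SIZE = 16 * 1024 * 1024  # e.g. a 16M --buffer-size

buf = mmap.mmap(-1, BUFFER_SIZE)  # -1 => anonymous, not a real file
buf[:5] = b"hello"                # use it like a transfer buffer
data = bytes(buf[:5])
buf.close()                       # mapping can go back to the OS at once
print(data)
```

With heap allocation the freed pages may linger in the allocator; with an anonymous mapping, closing it releases them immediately, which is the memory-usage difference the flag description is pointing at.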
+--no-traverse + +The --no-traverse flag controls whether the destination file system is +traversed when using the copy or move commands. --no-traverse is not +compatible with sync and will be ignored if you supply it with sync. + +If you are only copying a small number of files (or are filtering most +of the files) and/or have a large number of files on the destination +then --no-traverse will stop rclone listing the destination and save +time. + +However, if you are copying a large number of files, especially if you +are doing a copy where lots of the files under consideration haven't +changed and won't need copying then you shouldn't use --no-traverse. + +See rclone copy for an example of how to use it. + Filtering @@ -4333,17 +4626,15 @@ So first configure rclone on your desktop machine to set up the config file. -Find the config file by running rclone -h and looking for the help for -the --config option +Find the config file by running rclone config file, for example - $ rclone -h - [snip] - --config="/home/user/.rclone.conf": Config file. - [snip] + $ rclone config file + Configuration file is stored at: + /home/user/.rclone.conf Now transfer it to the remote box (scp, cut paste, ftp, sftp etc) and -place it in the correct place (use rclone -h on the remote box to find -out where). +place it in the correct place (use rclone config file on the remote box +to find out where). @@ -5050,7 +5341,7 @@ similar to Python slices: start[:end] start is the 0 based chunk number from the beginning of the file to fetch inclusive. end is 0 based chunk number from the beginning of the -file to fetch exclisive. Both values can be negative, in which case they +file to fetch exclusive. Both values can be negative, in which case they count from the back of the file. The value "-5:" represents the last 5 chunks of a file. 
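The start[:end] chunk-slice semantics described above (0-based, start inclusive, end exclusive, negative values counting from the back) can be sketched like this. This is an illustration of the semantics only, not rclone's implementation, and the function name is ours:

```python
def chunk_range(spec, total_chunks):
    """Resolve a start[:end] chunk specifier into a list of chunk
    numbers. Negative values count from the back of the file
    (sketch of the documented semantics, not rclone's code)."""
    start_s, sep, end_s = spec.partition(":")
    start = int(start_s) if start_s else 0
    end = int(end_s) if end_s else total_chunks
    if not sep:            # a bare "N" fetches just chunk N
        end = start + 1
    if start < 0:
        start += total_chunks
    if end < 0:
        end += total_chunks
    return list(range(start, end))

# "-5:" selects the last 5 chunks of a 20-chunk file
print(chunk_range("-5:", 20))  # [15, 16, 17, 18, 19]
```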
@@ -5290,8 +5581,6 @@ This takes the following parameters - dstRemote - a path within that remote eg "file2.txt" for the destination -This returns - jobid - ID of async job to query with job/status - Authentication is required for this call. operations/copyurl: Copy the URL to the object @@ -5370,8 +5659,6 @@ This takes the following parameters - dstRemote - a path within that remote eg "file2.txt" for the destination -This returns - jobid - ID of async job to query with job/status - Authentication is required for this call. operations/purge: Remove a directory or container and all of its contents @@ -5448,6 +5735,20 @@ Only supply the options you wish to change. If an option is unknown it will be silently ignored. Not all options will have an effect when changed like this. +For example: + +This sets DEBUG level logs (-vv) + + rclone rc options/set --json '{"main": {"LogLevel": 8}}' + +And this sets INFO level logs (-v) + + rclone rc options/set --json '{"main": {"LogLevel": 7}}' + +And this sets NOTICE level logs (normal without -v) + + rclone rc options/set --json '{"main": {"LogLevel": 6}}' + rc/error: This returns an error This returns an error with the input as part of its error string. Useful @@ -5479,8 +5780,6 @@ This takes the following parameters - srcFs - a remote name string eg "drive:src" for the source - dstFs - a remote name string eg "drive:dst" for the destination -This returns - jobid - ID of async job to query with job/status - See the copy command command for more information on the above. Authentication is required for this call. @@ -5493,8 +5792,6 @@ This takes the following parameters - dstFs - a remote name string eg "drive:dst" for the destination - deleteEmptySrcDirs - delete empty src directories if set -This returns - jobid - ID of async job to query with job/status - See the move command command for more information on the above. Authentication is required for this call. 
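The options/set examples above use rclone's numeric log levels directly (8 = DEBUG, 7 = INFO, 6 = NOTICE, per those examples). A small sketch of building such a JSON body — the helper name and dict are ours, not an rclone API:

```python
import json

# Numeric log levels as used in the rclone rc examples above.
LOG_LEVELS = {"DEBUG": 8, "INFO": 7, "NOTICE": 6}

def options_set_payload(level):
    """Build the JSON body passed to `rclone rc options/set --json`
    (illustrative helper, not part of rclone)."""
    return json.dumps({"main": {"LogLevel": LOG_LEVELS[level]}})

print(options_set_payload("DEBUG"))  # {"main": {"LogLevel": 8}}
```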
@@ -5506,8 +5803,6 @@ This takes the following parameters - srcFs - a remote name string eg "drive:src" for the source - dstFs - a remote name string eg "drive:dst" for the destination -This returns - jobid - ID of async job to query with job/status - See the sync command command for more information on the above. Authentication is required for this call. @@ -5767,30 +6062,30 @@ Features Here is an overview of the major features of each cloud storage system. - Name Hash ModTime Case Insensitive Duplicate Files MIME Type - ------------------------------ ------------- --------- ------------------ ----------------- ----------- - Amazon Drive MD5 No Yes No R - Amazon S3 MD5 Yes No No R/W - Backblaze B2 SHA1 Yes No No R/W - Box SHA1 Yes Yes No - - Dropbox DBHASH † Yes Yes No - - FTP - No No No - - Google Cloud Storage MD5 Yes No No R/W - Google Drive MD5 Yes No Yes R/W - HTTP - No No No R - Hubic MD5 Yes No No R/W - Jottacloud MD5 Yes Yes No R/W - Mega - No No Yes - - Microsoft Azure Blob Storage MD5 Yes No No R/W - Microsoft OneDrive SHA1 ‡‡ Yes Yes No R - OpenDrive MD5 Yes Yes No - - Openstack Swift MD5 Yes No No R/W - pCloud MD5, SHA1 Yes No No W - QingStor MD5 No No No R/W - SFTP MD5, SHA1 ‡ Yes Depends No - - WebDAV - Yes †† Depends No - - Yandex Disk MD5 Yes No No R/W - The local filesystem All Yes Depends No - + Name Hash ModTime Case Insensitive Duplicate Files MIME Type + ------------------------------ -------------- --------- ------------------ ----------------- ----------- + Amazon Drive MD5 No Yes No R + Amazon S3 MD5 Yes No No R/W + Backblaze B2 SHA1 Yes No No R/W + Box SHA1 Yes Yes No - + Dropbox DBHASH † Yes Yes No - + FTP - No No No - + Google Cloud Storage MD5 Yes No No R/W + Google Drive MD5 Yes No Yes R/W + HTTP - No No No R + Hubic MD5 Yes No No R/W + Jottacloud MD5 Yes Yes No R/W + Mega - No No Yes - + Microsoft Azure Blob Storage MD5 Yes No No R/W + Microsoft OneDrive SHA1 ‡‡ Yes Yes No R + OpenDrive MD5 Yes Yes No - + Openstack Swift MD5 Yes No No 
R/W + pCloud MD5, SHA1 Yes No No W + QingStor MD5 No No No R/W + SFTP MD5, SHA1 ‡ Yes Depends No - + WebDAV MD5, SHA1 †† Yes ††† Depends No - + Yandex Disk MD5 Yes No No R/W + The local filesystem All Yes Depends No - Hash @@ -5808,7 +6103,9 @@ of all the 4MB block SHA256s. ‡ SFTP supports checksums if the same login has shell access and md5sum or sha1sum as well as echo are in the remote's PATH. -†† WebDAV supports modtimes when used with Owncloud and Nextcloud only. +†† WebDAV supports hashes when used with Owncloud and Nextcloud only. + +††† WebDAV supports modtimes when used with Owncloud and Nextcloud only. ‡‡ Microsoft OneDrive Personal supports SHA1 hashes, whereas OneDrive for business and SharePoint server support Microsoft's own QuickXorHash. @@ -5897,7 +6194,7 @@ more efficient. pCloud Yes Yes Yes Yes Yes No No No #2178 Yes QingStor No Yes No No No Yes No No #2178 No SFTP No No Yes Yes No No Yes No #2178 No - WebDAV Yes Yes Yes Yes No No Yes ‡ No #2178 No + WebDAV Yes Yes Yes Yes No No Yes ‡ No #2178 Yes Yandex Disk Yes Yes Yes Yes Yes No Yes Yes Yes The local filesystem Yes No Yes Yes No No Yes No Yes @@ -5967,6 +6264,8 @@ About This is used to fetch quota information from the remote, like bytes used/free/quota and bytes used in the trash. +This is also used to return the space used, available for rclone mount. + If the server can't do About then rclone about will return an error. @@ -6402,6 +6701,7 @@ Amazon S3 Storage Providers The S3 backend can be used with a number of different providers: - AWS S3 +- Alibaba Cloud (Aliyun) Object Storage System (OSS) - Ceph - DigitalOcean Spaces - Dreamhost @@ -6609,6 +6909,8 @@ This will guide you through an interactive setup process. \ "STANDARD_IA" 5 / One Zone Infrequent Access storage class \ "ONEZONE_IA" + 6 / Glacier storage class + \ "GLACIER" storage_class> 1 Remote config -------------------- @@ -6658,8 +6960,32 @@ X-Amz-Meta-Mtime as floating point since the epoch accurate to 1 ns. 
Multipart uploads rclone supports multipart uploads with S3 which means that it can upload -files bigger than 5GB. Note that files uploaded _both_ with multipart -upload _and_ through crypt remotes do not have MD5 sums. +files bigger than 5GB. + +Note that files uploaded _both_ with multipart upload _and_ through +crypt remotes do not have MD5 sums. + +Rclone switches from single part uploads to multipart uploads at the +point specified by --s3-upload-cutoff. This can be a maximum of 5GB and +a minimum of 0 (ie always upload multipart files). + +The chunk sizes used in the multipart upload are specified by +--s3-chunk-size and the number of chunks uploaded concurrently is +specified by --s3-upload-concurrency. + +Multipart uploads will use --transfers * --s3-upload-concurrency * +--s3-chunk-size extra memory. Single part uploads do not use extra +memory. + +Single part transfers can be faster than multipart transfers or slower +depending on your latency from S3 - the more latency, the more likely +single part transfers will be faster. + +Increasing --s3-upload-concurrency will increase throughput (8 would be +a sensible value) and increasing --s3-chunk-size also increases +throughput (16M would be sensible). Increasing either of these will use +more memory. The default values are high enough to gain most of the +possible performance without using too much memory. Buckets and Regions @@ -6753,9 +7079,10 @@ A proper fix is being worked on in issue #1824. Glacier -You can transition objects to glacier storage using a lifecycle policy. -The bucket can still be synced or copied into normally, but if rclone -tries to access the data you will see an error like below. +You can upload objects using the glacier storage class or transition +them to glacier using a lifecycle policy. The bucket can still be synced +or copied into normally, but if rclone tries to access data from the +glacier storage class you will see an error like below.
2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file @@ -6765,7 +7092,8 @@ rclone. Standard Options Here are the standard options specific to s3 (Amazon S3 Compliant -Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)). +Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, +Minio, etc)). --s3-provider @@ -6778,6 +7106,8 @@ Choose your S3 provider. - Examples: - "AWS" - Amazon Web Services (AWS) S3 + - "Alibaba" + - Alibaba Cloud Object Storage System (OSS) formerly Aliyun - "Ceph" - Ceph Object Storage - "DigitalOcean" @@ -6788,6 +7118,8 @@ Choose your S3 provider. - IBM COS S3 - "Minio" - Minio Object Storage + - "Netease" + - Netease Object Storage (NOS) - "Wasabi" - Wasabi Object Storage - "Other" @@ -6860,6 +7192,9 @@ Region to connect to. - "eu-west-2" - EU (London) Region - Needs location constraint eu-west-2. + - "eu-north-1" + - EU (Stockholm) Region + - Needs location constraint eu-north-1. - "eu-central-1" - EU (Frankfurt) Region - Needs location constraint eu-central-1. @@ -6959,9 +7294,9 @@ Endpoint for IBM COS S3 API. Specify if using an IBM COS On Premise. - "s3.ams-eu-geo.objectstorage.service.networklayer.com" - EU Cross Region Amsterdam Private Endpoint - "s3.eu-gb.objectstorage.softlayer.net" - - Great Britan Endpoint + - Great Britain Endpoint - "s3.eu-gb.objectstorage.service.networklayer.com" - - Great Britan Private Endpoint + - Great Britain Private Endpoint - "s3.ap-geo.objectstorage.softlayer.net" - APAC Cross Regional Endpoint - "s3.tok-ap-geo.objectstorage.softlayer.net" @@ -6989,6 +7324,54 @@ Endpoint for IBM COS S3 API. Specify if using an IBM COS On Premise. --s3-endpoint +Endpoint for OSS API. 
+ +- Config: endpoint +- Env Var: RCLONE_S3_ENDPOINT +- Type: string +- Default: "" +- Examples: + - "oss-cn-hangzhou.aliyuncs.com" + - East China 1 (Hangzhou) + - "oss-cn-shanghai.aliyuncs.com" + - East China 2 (Shanghai) + - "oss-cn-qingdao.aliyuncs.com" + - North China 1 (Qingdao) + - "oss-cn-beijing.aliyuncs.com" + - North China 2 (Beijing) + - "oss-cn-zhangjiakou.aliyuncs.com" + - North China 3 (Zhangjiakou) + - "oss-cn-huhehaote.aliyuncs.com" + - North China 5 (Huhehaote) + - "oss-cn-shenzhen.aliyuncs.com" + - South China 1 (Shenzhen) + - "oss-cn-hongkong.aliyuncs.com" + - Hong Kong (Hong Kong) + - "oss-us-west-1.aliyuncs.com" + - US West 1 (Silicon Valley) + - "oss-us-east-1.aliyuncs.com" + - US East 1 (Virginia) + - "oss-ap-southeast-1.aliyuncs.com" + - Southeast Asia Southeast 1 (Singapore) + - "oss-ap-southeast-2.aliyuncs.com" + - Asia Pacific Southeast 2 (Sydney) + - "oss-ap-southeast-3.aliyuncs.com" + - Southeast Asia Southeast 3 (Kuala Lumpur) + - "oss-ap-southeast-5.aliyuncs.com" + - Asia Pacific Southeast 5 (Jakarta) + - "oss-ap-northeast-1.aliyuncs.com" + - Asia Pacific Northeast 1 (Japan) + - "oss-ap-south-1.aliyuncs.com" + - Asia Pacific South 1 (Mumbai) + - "oss-eu-central-1.aliyuncs.com" + - Central Europe 1 (Frankfurt) + - "oss-eu-west-1.aliyuncs.com" + - West Europe (London) + - "oss-me-east-1.aliyuncs.com" + - Middle East 1 (Dubai) + +--s3-endpoint + Endpoint for S3 API. Required when using an S3 clone. - Config: endpoint @@ -7033,6 +7416,8 @@ creating buckets only. - EU (Ireland) Region. - "eu-west-2" - EU (London) Region. + - "eu-north-1" + - EU (Stockholm) Region. - "EU" - EU Region. 
- "ap-southeast-1" @@ -7075,7 +7460,7 @@ For on-prem COS, do not make a selection from this list, hit enter - "us-east-flex" - US East Region Flex - "us-south-standard" - - US Sout hRegion Standard + - US South Region Standard - "us-south-vault" - US South Region Vault - "us-south-cold" @@ -7091,13 +7476,13 @@ For on-prem COS, do not make a selection from this list, hit enter - "eu-flex" - EU Cross Region Flex - "eu-gb-standard" - - Great Britan Standard + - Great Britain Standard - "eu-gb-vault" - - Great Britan Vault + - Great Britain Vault - "eu-gb-cold" - - Great Britan Cold + - Great Britain Cold - "eu-gb-flex" - - Great Britan Flex + - Great Britain Flex - "ap-standard" - APAC Standard - "ap-vault" @@ -7137,6 +7522,9 @@ not sure. Used when creating buckets only. Canned ACL used when creating buckets and storing or copying objects. +This ACL is used for creating objects and if bucket_acl isn't set, for +creating buckets too. + For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl @@ -7238,18 +7626,80 @@ The storage class to use when storing new objects in S3. - Standard Infrequent Access storage class - "ONEZONE_IA" - One Zone Infrequent Access storage class + - "GLACIER" + - Glacier storage class + +--s3-storage-class + +The storage class to use when storing new objects in OSS. + +- Config: storage_class +- Env Var: RCLONE_S3_STORAGE_CLASS +- Type: string +- Default: "" +- Examples: + - "" + - Default + - "STANDARD" + - Standard storage class + - "GLACIER" + - Archive storage mode. + - "STANDARD_IA" + - Infrequent access storage mode. Advanced Options Here are the advanced options specific to s3 (Amazon S3 Compliant -Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)). +Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, +Minio, etc)). + +--s3-bucket-acl + +Canned ACL used when creating buckets. 
+ +For more info visit +https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl + +Note that this ACL is applied only when creating buckets. If it +isn't set then "acl" is used instead. + +- Config: bucket_acl + +- Env Var: RCLONE_S3_BUCKET_ACL +- Type: string +- Default: "" +- Examples: + - "private" + - Owner gets FULL_CONTROL. No one else has access rights + (default). + - "public-read" + - Owner gets FULL_CONTROL. The AllUsers group gets READ + access. + - "public-read-write" + - Owner gets FULL_CONTROL. The AllUsers group gets READ and + WRITE access. + - Granting this on a bucket is generally not recommended. + - "authenticated-read" + - Owner gets FULL_CONTROL. The AuthenticatedUsers group gets + READ access. + +--s3-upload-cutoff + +Cutoff for switching to chunked upload + +Any files larger than this will be uploaded in chunks of chunk_size. The +minimum is 0 and the maximum is 5GB. + +- Config: upload_cutoff +- Env Var: RCLONE_S3_UPLOAD_CUTOFF +- Type: SizeSuffix +- Default: 200M --s3-chunk-size Chunk size to use for uploading. -Any files larger than this will be uploaded in chunks of this size. The -default is 5MB. The minimum is 5MB. +When uploading files larger than upload_cutoff they will be uploaded as +multipart uploads using this chunk size. Note that "--s3-upload-concurrency" chunks of this size are buffered in memory per transfer. @@ -7294,7 +7744,7 @@ this may help to speed up the transfers. - Config: upload_concurrency - Env Var: RCLONE_S3_UPLOAD_CONCURRENCY - Type: int -- Default: 2 +- Default: 4 --s3-force-path-style @@ -7698,6 +8148,29 @@ So once set up, for example to copy files into a bucket rclone copy /path/to/files minio:bucket +Scaleway + +Scaleway's Object Storage platform allows you to store anything from +backups, logs and web assets to documents and photos. Files can be +dropped from the Scaleway console or transferred through the Scaleway +API and CLI or using any S3-compatible tool.
+ +Scaleway provides an S3 interface which can be configured for use with +rclone like this: + + [scaleway] + type = s3 + env_auth = false + endpoint = s3.nl-ams.scw.cloud + access_key_id = SCWXXXXXXXXXXXXXX + secret_access_key = 1111111-2222-3333-44444-55555555555555 + region = nl-ams + location_constraint = + acl = private + force_path_style = false + server_side_encryption = + storage_class = + Wasabi Wasabi is a cloud-based object storage service for a broad range of @@ -7808,29 +8281,40 @@ This will leave the config file looking like this. server_side_encryption = storage_class = -Aliyun OSS / Netease NOS +Alibaba OSS -This describes how to set up Aliyun OSS - Netease NOS is the same except -for different endpoints. +Here is an example of making an Alibaba Cloud (Aliyun) OSS +configuration. First run: -Note this is a pretty standard S3 setup, except for the setting of -force_path_style = false in the advanced config. + rclone config - # rclone config - e/n/d/r/c/s/q> n +This will guide you through an interactive setup process. + + No remotes found - make a new one + n) New remote + s) Set configuration password + q) Quit config + n/s/q> n name> oss Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value - 3 / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio) + [snip] + 4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc) \ "s3" + [snip] Storage> s3 Choose your S3 provider. Enter a string value. Press Enter for the default (""). 
Choose a number from below, or type in your own value - 8 / Any other S3 compatible provider - \ "Other" - provider> other + 1 / Amazon Web Services (AWS) S3 + \ "AWS" + 2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun + \ "Alibaba" + 3 / Ceph Object Storage + \ "Ceph" + [snip] + provider> Alibaba Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. Enter a boolean value (true or false). Press Enter for the default ("false"). @@ -7843,73 +8327,74 @@ force_path_style = false in the advanced config. AWS Access Key ID. Leave blank for anonymous access or runtime credentials. Enter a string value. Press Enter for the default (""). - access_key_id> xxxxxxxxxxxx + access_key_id> accesskeyid AWS Secret Access Key (password) Leave blank for anonymous access or runtime credentials. Enter a string value. Press Enter for the default (""). - secret_access_key> xxxxxxxxxxxxxxxxx - Region to connect to. - Leave blank if you are using an S3 clone and you don't have a region. + secret_access_key> secretaccesskey + Endpoint for OSS API. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value - 1 / Use this if unsure. Will use v4 signatures and an empty region. - \ "" - 2 / Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH. - \ "other-v2-signature" - region> 1 - Endpoint for S3 API. - Required when using an S3 clone. - Enter a string value. Press Enter for the default (""). - Choose a number from below, or type in your own value - endpoint> oss-cn-shenzhen.aliyuncs.com - Location constraint - must be set to match the Region. - Leave blank if not sure. Used when creating buckets only. - Enter a string value. Press Enter for the default (""). - location_constraint> - Canned ACL used when creating buckets and/or storing objects in S3. 
- For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl + 1 / East China 1 (Hangzhou) + \ "oss-cn-hangzhou.aliyuncs.com" + 2 / East China 2 (Shanghai) + \ "oss-cn-shanghai.aliyuncs.com" + 3 / North China 1 (Qingdao) + \ "oss-cn-qingdao.aliyuncs.com" + [snip] + endpoint> 1 + Canned ACL used when creating buckets and storing or copying objects. + + Note that this ACL is applied when server side copying objects as S3 + doesn't copy the ACL from the source but rather writes a fresh one. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value 1 / Owner gets FULL_CONTROL. No one else has access rights (default). \ "private" + 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. + \ "public-read" + / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. + [snip] acl> 1 + The storage class to use when storing new objects in OSS. + Enter a string value. Press Enter for the default (""). + Choose a number from below, or type in your own value + 1 / Default + \ "" + 2 / Standard storage class + \ "STANDARD" + 3 / Archive storage mode. + \ "GLACIER" + 4 / Infrequent access storage mode. + \ "STANDARD_IA" + storage_class> 1 Edit advanced config? (y/n) y) Yes n) No - y/n> y - Chunk size to use for uploading - Enter a size with suffix k,M,G,T. Press Enter for the default ("5M"). - chunk_size> - Don't store MD5 checksum with object metadata - Enter a boolean value (true or false). Press Enter for the default ("false"). - disable_checksum> - An AWS session token - Enter a string value. Press Enter for the default (""). - session_token> - Concurrency for multipart uploads. - Enter a signed integer. Press Enter for the default ("2"). - upload_concurrency> - If true use path style access if false use virtual hosted style. - Some providers (eg Aliyun OSS or Netease COS) require this. - Enter a boolean value (true or false). 
Press Enter for the default ("true"). - force_path_style> false + y/n> n Remote config -------------------- [oss] type = s3 - provider = Other + provider = Alibaba env_auth = false - access_key_id = xxxxxxxxx - secret_access_key = xxxxxxxxxxxxx - endpoint = oss-cn-shenzhen.aliyuncs.com + access_key_id = accesskeyid + secret_access_key = secretaccesskey + endpoint = oss-cn-hangzhou.aliyuncs.com acl = private - force_path_style = false + storage_class = Standard -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y +Netease NOS + +For Netease NOS configure as per the configurator rclone config setting +the provider Netease. This will automatically set +force_path_style = false which is necessary for it to run properly. + Backblaze B2 @@ -7922,9 +8407,11 @@ Here is an example of making a b2 configuration. First run rclone config -This will guide you through an interactive setup process. You will need -your account number (a short hex number) and key (a long hex number) -which you can get from the b2 control panel. +This will guide you through an interactive setup process. To +authenticate you will either need your Account ID (a short hex number) +and Master Application Key (a long hex number) OR an Application Key, +which is the recommended method. See below for further details on +generating and using an Application Key. No remotes found - make a new one n) New remote @@ -8002,13 +8489,14 @@ Application Keys B2 supports multiple Application Keys for different access permission to B2 Buckets. -You can use these with rclone too. +You can use these with rclone too; you will need to use rclone version +1.43 or later. Follow Backblaze's docs to create an Application Key with the required -permission and add the Application Key ID as the account and the +permission and add the applicationKeyId as the account and the Application Key itself as the key. 
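For reference, a B2 remote restricted with an Application Key ends up looking something like this in the rclone config file (the remote name and the account/key values below are invented placeholders, not real credentials):

    [secret-b2]
    type = b2
    account = 000123456789ab0000000001
    key = K000abcdefghijklmnopqrstuvwxyz012345

These values are illustrative only - use the applicationKeyId and Application Key from your own Backblaze account.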
-Note that you must put the Application Key ID as the account - you can't
+Note that you must put the _applicationKeyId_ as the account – you can't
 use the master Account ID. If you try then B2 will return 401 errors.

--fast-list
@@ -8081,8 +8569,8 @@ versions of files, leaving the current ones intact. You can also supply
a path and only old versions under that path will be deleted, eg
rclone cleanup remote:bucket/path/to/stuff.

-Note that cleanup does not remove partially uploaded files from the
-bucket.
+Note that cleanup will remove partially uploaded files from the bucket
+if they are more than a day old.

When you purge a bucket, the current and the old versions will be
deleted then the bucket will be deleted.

@@ -8271,7 +8759,7 @@ Upload chunk size. Must fit in memory.

When uploading large files, chunk the file into this size. Note that
these chunks are buffered in memory and there might a maximum of
-"--transfers" chunks in progress at once. 5,000,000 Bytes is the minimim
+"--transfers" chunks in progress at once. 5,000,000 Bytes is the minimum
size.

- Config: chunk_size
@@ -8279,6 +8767,15 @@ size.
- Type: SizeSuffix
- Default: 96M

+--b2-disable-checksum
+
+Disable checksums for large (> upload cutoff) files
+
+- Config: disable_checksum
+- Env Var: RCLONE_B2_DISABLE_CHECKSUM
+- Type: bool
+- Default: false
+

Box

@@ -8385,6 +8882,17 @@ To copy a local directory to an Box directory called backup

    rclone copy /home/source remote:backup

+Using rclone with an Enterprise account with SSO
+
+If you have an "Enterprise" account type with Box with single sign on
+(SSO), you need to create a password to use Box with rclone. This can be
+done at your Enterprise Box account by going to Settings, "Account" Tab,
+and then setting the password in the "Authentication" field.
+
+Once you have done this, you can set up your Enterprise Box account
+using the same procedure detailed above, using the password you have
+just set.
+ Invalid refresh token According to the box docs: @@ -9991,6 +10499,9 @@ Note that --bind isn't supported. FTP could support server side move but doesn't yet. +Note that the ftp backend does not support the ftp_proxy environment +variable yet. + Google Cloud Storage @@ -10332,16 +10843,26 @@ Location for the newly created buckets. - Multi-regional location for United States. - "asia-east1" - Taiwan. + - "asia-east2" + - Hong Kong. - "asia-northeast1" - Tokyo. + - "asia-south1" + - Mumbai. - "asia-southeast1" - Singapore. - "australia-southeast1" - Sydney. + - "europe-north1" + - Finland. - "europe-west1" - Belgium. - "europe-west2" - London. + - "europe-west3" + - Frankfurt. + - "europe-west4" + - Netherlands. - "us-central1" - Iowa. - "us-east1" @@ -10350,6 +10871,8 @@ Location for the newly created buckets. - Northern Virginia. - "us-west1" - Oregon. + - "us-west2" + - California. --gcs-storage-class @@ -11144,6 +11667,24 @@ If Object's are greater, use drive v2 API to download. - Type: SizeSuffix - Default: off +--drive-pacer-min-sleep + +Minimum time to sleep between API calls. + +- Config: pacer_min_sleep +- Env Var: RCLONE_DRIVE_PACER_MIN_SLEEP +- Type: Duration +- Default: 100ms + +--drive-pacer-burst + +Number of API calls to allow without sleeping. + +- Config: pacer_burst +- Env Var: RCLONE_DRIVE_PACER_BURST +- Type: int +- Default: 100 + Limitations Drive has quite a lot of rate limiting. This causes rclone to be limited @@ -11200,9 +11741,12 @@ that each client_id can do set by Google. rclone already has a high quota and I will continue to make sure it is high enough by contacting Google. -However you might find you get better performance making your own -client_id if you are a heavy user. Or you may not depending on exactly -how Google have been raising rclone's rate limit. +It is strongly recommended to use your own client ID as the default +rclone ID is heavily used. 
If you have multiple services running, it is
+recommended to use an API key for each service. The default Google quota
+is 10 transactions per second so it is recommended to stay under that
+number, since using more than that will cause rclone to be rate limited
+and make things slower.

Here is how to create your own Google Drive client ID for rclone:

@@ -11363,6 +11907,8 @@ URL of http host to connect to
- Examples:
    - "https://example.com"
        - Connect to example.com
+    - "https://user:pass@example.com"
+        - Connect to example.com using a username and password


Hubic

@@ -11524,6 +12070,24 @@ default for this is 5GB which is its maximum value.
- Type: SizeSuffix
- Default: 5G

+--hubic-no-chunk
+
+Don't chunk files during streaming upload.
+
+When doing streaming uploads (eg using rcat or mount) setting this flag
+will cause the swift backend to not upload chunked files.
+
+This will limit the maximum upload size to 5GB. However non chunked
+files are easier to deal with and have an MD5SUM.
+
+Rclone will still chunk files bigger than chunk_size when doing normal
+copy operations.
+
+- Config: no_chunk
+- Env Var: RCLONE_HUBIC_NO_CHUNK
+- Type: bool
+- Default: false
+

Limitations

This uses the normal OpenStack Swift mechanism to refresh the Swift API
@@ -11661,22 +12225,13 @@ Here are the standard options specific to jottacloud (JottaCloud).

--jottacloud-user

-User Name
+User Name:

- Config: user
- Env Var: RCLONE_JOTTACLOUD_USER
- Type: string
- Default: ""

---jottacloud-pass
-
-Password.
-
-- Config: pass
-- Env Var: RCLONE_JOTTACLOUD_PASS
-- Type: string
-- Default: ""
-
--jottacloud-mountpoint

The mountpoint to use.
@@ -11725,6 +12280,15 @@ public link.
- Type: bool
- Default: false

+--jottacloud-upload-resume-limit
+
+Files bigger than this can be resumed if the upload fails.
+ +- Config: upload_resume_limit +- Env Var: RCLONE_JOTTACLOUD_UPLOAD_RESUME_LIMIT +- Type: SizeSuffix +- Default: 10M + Limitations Note that Jottacloud is case insensitive so you can't have a file called @@ -12408,12 +12972,22 @@ they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ? in it will be mapped to ? instead. -The largest allowed file size is 10GiB (10,737,418,240 bytes). +The largest allowed file sizes are 15GB for OneDrive for Business and +35GB for OneDrive Personal (Updated 4 Jan 2019). + +The entire path, including the file name, must contain fewer than 400 +characters for OneDrive, OneDrive for Business and SharePoint Online. If +you are encrypting file and folder names with rclone, you may want to +pay attention to this limitation because the encrypted names are +typically longer than the original ones. OneDrive seems to be OK with at least 50,000 files in a folder, but at 100,000 rclone will get errors listing the directory like couldn’t list files: UnknownError:. See #2707 for more info. +An official document about the limitations for different types of +OneDrive can be found here. + Versioning issue Every change in OneDrive causes the service to create a new version. @@ -12424,6 +12998,25 @@ space. The copy is the only rclone command affected by this as we copy the file and then afterwards set the modification time to match the source file. +NOTE: Starting October 2018, users will no longer be able to disable +versioning by default. This is because Microsoft has brought an update +to the mechanism. To change this new default setting, a PowerShell +command is required to be run by a SharePoint admin. If you are an +admin, you can run these commands in PowerShell to change that setting: + +1. Install-Module -Name Microsoft.Online.SharePoint.PowerShell (in case + you haven't installed this already) +2. 
Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking
+3. Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU@YOURSITE.COM
+   (replacing YOURSITE, YOU, YOURSITE.COM with the actual values; this
+   will prompt for your credentials)
+4. Set-SPOTenant -EnableMinimumVersionRequirement $False
+5. Disconnect-SPOService (to disconnect from the server)
+
+_Below are the steps for normal users to disable versioning. If you
+don't see the "No Versioning" option, make sure the above requirements
+are met._
+
User Weropol has found a method to disable versioning on OneDrive

1. Open the settings menu by clicking on the gear symbol at the top of
@@ -12804,6 +13397,55 @@ Number of connection retries.
- Type: int
- Default: 3

+--qingstor-upload-cutoff
+
+Cutoff for switching to chunked upload
+
+Any files larger than this will be uploaded in chunks of chunk_size. The
+minimum is 0 and the maximum is 5GB.
+
+- Config: upload_cutoff
+- Env Var: RCLONE_QINGSTOR_UPLOAD_CUTOFF
+- Type: SizeSuffix
+- Default: 200M
+
+--qingstor-chunk-size
+
+Chunk size to use for uploading.
+
+When uploading files larger than upload_cutoff they will be uploaded as
+multipart uploads using this chunk size.
+
+Note that "--qingstor-upload-concurrency" chunks of this size are
+buffered in memory per transfer.
+
+If you are transferring large files over high speed links and you have
+enough memory, then increasing this will speed up the transfers.
+
+- Config: chunk_size
+- Env Var: RCLONE_QINGSTOR_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 4M
+
+--qingstor-upload-concurrency
+
+Concurrency for multipart uploads.
+
+This is the number of chunks of the same file that are uploaded
+concurrently.
+
+NB if you set this to > 1 then the checksums of multipart uploads become
+corrupted (the uploads themselves are not corrupted though).
+ +If you are uploading small numbers of large file over high speed link +and these uploads do not fully utilize your bandwidth, then increasing +this may help to speed up the transfers. + +- Config: upload_concurrency +- Env Var: RCLONE_QINGSTOR_UPLOAD_CONCURRENCY +- Type: int +- Default: 1 + Swift @@ -13190,6 +13832,33 @@ Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - Type: string - Default: "" +--swift-application-credential-id + +Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + +- Config: application_credential_id +- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_ID +- Type: string +- Default: "" + +--swift-application-credential-name + +Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + +- Config: application_credential_name +- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_NAME +- Type: string +- Default: "" + +--swift-application-credential-secret + +Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + +- Config: application_credential_secret +- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_SECRET +- Type: string +- Default: "" + --swift-auth-version AuthVersion - optional - set to (1,2,3) if your auth URL has no version @@ -13253,6 +13922,24 @@ default for this is 5GB which is its maximum value. - Type: SizeSuffix - Default: 5G +--swift-no-chunk + +Don't chunk files during streaming upload. + +When doing streaming uploads (eg using rcat or mount) setting this flag +will cause the swift backend to not upload chunked files. + +This will limit the maximum upload size to 5GB. However non chunked +files are easier to deal with and have an MD5SUM. + +Rclone will still chunk files bigger than chunk_size when doing normal +copy operations. 
+ +- Config: no_chunk +- Env Var: RCLONE_SWIFT_NO_CHUNK +- Type: bool +- Default: false + Modified time The modified time is stored as metadata on the object as @@ -13553,12 +14240,20 @@ The SFTP remote supports three authentication methods: - Key file - ssh-agent -Key files should be unencrypted PEM-encoded private key files. For -instance /home/$USER/.ssh/id_rsa. +Key files should be PEM-encoded private key files. For instance +/home/$USER/.ssh/id_rsa. Only unencrypted OpenSSH or PEM encrypted files +are supported. If you don't specify pass or key_file then rclone will attempt to contact an ssh-agent. +You can also specify key_use_agent to force the usage of an ssh-agent. +In this case key_file can also be specified to force the usage of a +specific key in the ssh-agent. + +Using an ssh-agent is the only way to load encrypted OpenSSH keys at the +moment. + If you set the --sftp-ask-password option, rclone will prompt for a password when needed and no password has been configured. @@ -13633,14 +14328,40 @@ SSH password, leave blank to use ssh-agent. --sftp-key-file -Path to unencrypted PEM-encoded private key file, leave blank to use -ssh-agent. +Path to PEM-encoded private key file, leave blank or set key-use-agent +to use ssh-agent. - Config: key_file - Env Var: RCLONE_SFTP_KEY_FILE - Type: string - Default: "" +--sftp-key-file-pass + +The passphrase to decrypt the PEM-encoded private key file. + +Only PEM encrypted key files (old OpenSSH format) are supported. +Encrypted keys in the new OpenSSH format can't be used. + +- Config: key_file_pass +- Env Var: RCLONE_SFTP_KEY_FILE_PASS +- Type: string +- Default: "" + +--sftp-key-use-agent + +When set forces the usage of the ssh-agent. + +When key-file is also set, the ".pub" file of the specified key-file is +read and only the associated key is requested from the ssh-agent. This +allows to avoid Too many authentication failures for *username* errors +when the ssh-agent contains many keys. 
+
+- Config: key_use_agent
+- Env Var: RCLONE_SFTP_KEY_USE_AGENT
+- Type: bool
+- Default: false
+
--sftp-use-insecure-cipher

Enable the use of the aes128-cbc cipher. This cipher is insecure and may
@@ -13990,7 +14711,10 @@ Modified time and hashes

Plain WebDAV does not support modified times. However when used with
Owncloud or Nextcloud rclone will support modified times.

-Hashes are not supported.
+Likewise plain WebDAV does not support hashes, however when used with
+Owncloud or Nextcloud rclone will support SHA1 and MD5 hashes. Depending
+on the exact version of Owncloud or Nextcloud hashes may appear on all
+objects, or only on objects which had a hash uploaded with them.
+

Standard Options

@@ -14273,6 +14997,18 @@ Quota information

To view your current quota you can use the rclone about remote: command
which will display your usage limit (quota) and the current usage.

+Limitations
+
+When uploading very large files (bigger than about 5GB) you will need to
+increase the --timeout parameter. This is because Yandex pauses (perhaps
+to calculate the MD5SUM for the entire file) before returning
+confirmation that the file has been uploaded. The default handling of
+timeouts in rclone is to assume a 5 minute pause is an error and close
+the connection - you'll see net/http: timeout awaiting response headers
+errors in the logs if this is happening. Setting the timeout to twice
+the max size of file in GB should be enough, so if you want to upload a
+30GB file set a timeout of 2 * 30 = 60m, that is --timeout 60m.
+
Standard Options

Here are the standard options specific to yandex (Yandex Disk).
@@ -14383,7 +15119,8 @@ Normally rclone will ignore symlinks or junction points (which behave
like symlinks under Windows).

If you supply --copy-links or -L then rclone will follow the symlink and
-copy the pointed to file or directory.
+copy the pointed to file or directory. Note that this flag is
+incompatible with -links / -l.

This flag applies to all commands.
@@ -14412,6 +15149,65 @@ and 6 b/two 6 b/one +--links, -l + +Normally rclone will ignore symlinks or junction points (which behave +like symlinks under Windows). + +If you supply this flag then rclone will copy symbolic links from the +local storage, and store them as text files, with a '.rclonelink' suffix +in the remote storage. + +The text file will contain the target of the symbolic link (see +example). + +This flag applies to all commands. + +For example, supposing you have a directory structure like this + + $ tree /tmp/a + /tmp/a + ├── file1 -> ./file4 + └── file2 -> /home/user/file3 + +Copying the entire directory with '-l' + + $ rclone copyto -l /tmp/a/file1 remote:/tmp/a/ + +The remote files are created with a '.rclonelink' suffix + + $ rclone ls remote:/tmp/a + 5 file1.rclonelink + 14 file2.rclonelink + +The remote files will contain the target of the symbolic links + + $ rclone cat remote:/tmp/a/file1.rclonelink + ./file4 + + $ rclone cat remote:/tmp/a/file2.rclonelink + /home/user/file3 + +Copying them back with '-l' + + $ rclone copyto -l remote:/tmp/a/ /tmp/b/ + + $ tree /tmp/b + /tmp/b + ├── file1 -> ./file4 + └── file2 -> /home/user/file3 + +However, if copied back without '-l' + + $ rclone copyto remote:/tmp/a/ /tmp/b/ + + $ tree /tmp/b + /tmp/b + ├── file1.rclonelink + └── file2.rclonelink + +Note that this flag is incompatible with -copy-links / -L. + Restricting filesystems with --one-file-system Normally rclone will recurse through filesystems as mounted. @@ -14478,6 +15274,15 @@ Follow symlinks and copy the pointed to item. - Type: bool - Default: false +--links + +Translate symlinks to/from regular files with a '.rclonelink' extension + +- Config: links +- Env Var: RCLONE_LOCAL_LINKS +- Type: bool +- Default: false + --skip-links Don't warn about skipped symlinks. This flag disables warning messages @@ -14532,6 +15337,195 @@ Don't cross filesystem boundaries (unix/macOS only). 
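As a compact illustration of the two symlink flags described above (the paths and remote names here are hypothetical examples, not output from a real run):

    $ rclone copy -L /home/user/dir remote:deref   # follow links; targets stored as plain files
    $ rclone copy -l /home/user/dir remote:links   # store links as .rclonelink text files

Since the two flags are incompatible with each other, pick one behaviour per transfer.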
CHANGELOG

+v1.46 - 2019-02-09
+
+- New backends
+    - Support Alibaba Cloud (Aliyun) OSS via the s3 backend (Nick
+      Craig-Wood)
+- New commands
+    - serve dlna: serves a remote via DLNA for the local network
+      (nicolov)
+- New Features
+    - copy, move: Restore deprecated --no-traverse flag (Nick
+      Craig-Wood)
+        - This is useful for when transferring a small number of files
+          into a large destination
+    - genautocomplete: Add remote path completion for bash completion
+      (Christopher Peterson & Danil Semelenov)
+    - Buffer memory handling reworked to return memory to the OS
+      better (Nick Craig-Wood)
+        - Buffer recycling library to replace sync.Pool
+        - Optionally use memory mapped memory for better memory
+          shrinking
+        - Enable with --use-mmap if having memory problems - not
+          default yet
+    - Parallelise reading of files specified by --files-from (Nick
+      Craig-Wood)
+    - check: Add stats showing total files matched. (Dario Guzik)
+    - Allow rename/delete open files under Windows (Nick Craig-Wood)
+    - lsjson: Use exactly the correct number of decimal places in the
+      seconds (Nick Craig-Wood)
+    - Add cookie support with cmdline switch --use-cookies for all
+      HTTP based remotes (qip)
+    - Warn if --checksum is set but there are no hashes available
+      (Nick Craig-Wood)
+    - Rework rate limiting (pacer) to be more accurate and allow
+      bursting (Nick Craig-Wood)
+    - Improve error reporting for too many/few arguments in commands
+      (Nick Craig-Wood)
+    - listremotes: Remove -l short flag as it conflicts with the new
+      global flag (weetmuts)
+    - Make http serving with auth generate INFO messages on auth fail
+      (Nick Craig-Wood)
+- Bug Fixes
+    - Fix layout of stats (Nick Craig-Wood)
+    - Fix --progress crash under Windows Jenkins (Nick Craig-Wood)
+    - Fix transfer of google/onedrive docs by calling Rcat in Copy
+      when size is -1 (Cnly)
+    - copyurl: Fix checking of --dry-run (Denis Skovpen)
+- Mount
+    - Check that mountpoint and local directory to mount don't overlap
+      (Nick Craig-Wood)
+
- Fix mount size under 32 bit Windows (Nick Craig-Wood) +- VFS + - Implement renaming of directories for backends without DirMove + (Nick Craig-Wood) + - now all backends except b2 support renaming directories + - Implement --vfs-cache-max-size to limit the total size of the + cache (Nick Craig-Wood) + - Add --dir-perms and --file-perms flags to set default + permissions (Nick Craig-Wood) + - Fix deadlock on concurrent operations on a directory (Nick + Craig-Wood) + - Fix deadlock between RWFileHandle.close and File.Remove (Nick + Craig-Wood) + - Fix renaming/deleting open files with cache mode "writes" under + Windows (Nick Craig-Wood) + - Fix panic on rename with --dry-run set (Nick Craig-Wood) + - Fix vfs/refresh with recurse=true needing the --fast-list flag +- Local + - Add support for -l/--links (symbolic link translation) + (yair@unicorn) + - this works by showing links as link.rclonelink - see local + backend docs for more info + - this errors if used with -L/--copy-links + - Fix renaming/deleting open files on Windows (Nick Craig-Wood) +- Crypt + - Check for maximum length before decrypting filename to fix panic + (Garry McNulty) +- Azure Blob + - Allow building azureblob backend on *BSD (themylogin) + - Use the rclone HTTP client to support --dump headers, --tpslimit + etc (Nick Craig-Wood) + - Use the s3 pacer for 0 delay in non error conditions (Nick + Craig-Wood) + - Ignore directory markers (Nick Craig-Wood) + - Stop Mkdir attempting to create existing containers (Nick + Craig-Wood) +- B2 + - cleanup: will remove unfinished large files >24hrs old (Garry + McNulty) + - For a bucket limited application key check the bucket name (Nick + Craig-Wood) + - before this, rclone would use the authorised bucket + regardless of what you put on the command line + - Added --b2-disable-checksum flag (Wojciech Smigielski) + - this enables large files to be uploaded without a SHA-1 hash + for speed reasons +- Drive + - Set default pacer to 100ms for 10 tps (Nick 
Craig-Wood) + - This fits the Google defaults much better and reduces the + 403 errors massively + - Add --drive-pacer-min-sleep and --drive-pacer-burst to + control the pacer + - Improve ChangeNotify support for items with multiple parents + (Fabian Möller) + - Fix ListR for items with multiple parents - this fixes oddities + with vfs/refresh (Fabian Möller) + - Fix using --drive-impersonate and appfolders (Nick Craig-Wood) + - Fix google docs in rclone mount for some (not all) applications + (Nick Craig-Wood) +- Dropbox + - Retry-After support for Dropbox backend (Mathieu Carbou) +- FTP + - Wait for 60 seconds for a connection to Close then declare it + dead (Nick Craig-Wood) + - helps with indefinite hangs on some FTP servers +- Google Cloud Storage + - Update google cloud storage endpoints (weetmuts) +- HTTP + - Add an example with username and password which is supported but + wasn't documented (Nick Craig-Wood) + - Fix backend with --files-from and non-existent files (Nick + Craig-Wood) +- Hubic + - Make error message more informative if authentication fails + (Nick Craig-Wood) +- Jottacloud + - Resume and deduplication support (Oliver Heyme) + - Use token auth for all API requests Don't store password anymore + (Sebastian Bünger) + - Add support for 2-factor authentification (Sebastian Bünger) +- Mega + - Implement v2 account login which fixes logins for newer Mega + accounts (Nick Craig-Wood) + - Return error if an unknown length file is attempted to be + uploaded (Nick Craig-Wood) + - Add new error codes for better error reporting (Nick Craig-Wood) +- Onedrive + - Fix broken support for "shared with me" folders (Alex Chen) + - Fix root ID not normalised (Cnly) + - Return err instead of panic on unknown-sized uploads (Cnly) +- Qingstor + - Fix go routine leak on multipart upload errors (Nick Craig-Wood) + - Add upload chunk size/concurrency/cutoff control (Nick + Craig-Wood) + - Default --qingstor-upload-concurrency to 1 to work around bug + (Nick 
Craig-Wood)
+- S3
+    - Implement --s3-upload-cutoff for single part uploads below this
+      (Nick Craig-Wood)
+    - Change --s3-upload-concurrency default to 4 to increase
+      performance (Nick Craig-Wood)
+    - Add --s3-bucket-acl to control bucket ACL (Nick Craig-Wood)
+    - Auto detect region for buckets on operation failure (Nick
+      Craig-Wood)
+    - Add GLACIER storage class (William Cocker)
+    - Add Scaleway to s3 documentation (Rémy Léone)
+    - Add AWS endpoint eu-north-1 (weetmuts)
+- SFTP
+    - Add support for PEM encrypted private keys (Fabian Möller)
+    - Add option to force the usage of an ssh-agent (Fabian Möller)
+    - Perform environment variable expansion on key-file (Fabian
+      Möller)
+    - Fix rmdir on Windows based servers (eg CrushFTP) (Nick
+      Craig-Wood)
+    - Fix rmdir deleting directory contents on some SFTP servers (Nick
+      Craig-Wood)
+    - Fix error on dangling symlinks (Nick Craig-Wood)
+- Swift
+    - Add --swift-no-chunk to disable segmented uploads in rcat/mount
+      (Nick Craig-Wood)
+    - Introduce application credential auth support (kayrus)
+    - Fix memory usage by slimming Object (Nick Craig-Wood)
+    - Fix extra requests on upload (Nick Craig-Wood)
+    - Fix reauth on big files (Nick Craig-Wood)
+- Union
+    - Fix poll-interval not working (Nick Craig-Wood)
+- WebDAV
+    - Support About which means rclone mount will show the correct
+      disk size (Nick Craig-Wood)
+    - Support MD5 and SHA1 hashes with Owncloud and Nextcloud (Nick
+      Craig-Wood)
+    - Fail soft on time parsing errors (Nick Craig-Wood)
+    - Fix infinite loop on failed directory creation (Nick Craig-Wood)
+    - Fix identification of directories for Bitrix Site Manager (Nick
+      Craig-Wood)
+    - Fix upload of 0 length files on some servers (Nick Craig-Wood)
+    - Fix if MKCOL fails with 423 Locked assume the directory exists
+      (Nick Craig-Wood)
+
+
v1.45 - 2018-11-24

- New backends
@@ -16448,8 +17442,7 @@ all the remote storage systems.

Can I copy the config from one machine to another

Sure!
Rclone stores all of its config in a single file. If you want to
-find this file, the simplest way is to run rclone -h and look at the
-help for the --config flag which will tell you where it is.
+find this file, run rclone config file which will tell you where it is.

See the remote setup docs for more info.

@@ -16525,8 +17518,6 @@ In general the variables are called http_proxy (for services reached
over http) and https_proxy (for services reached over https). Most
public services will be using https, but you may wish to set both.

-If you ever use FTP then you would need to set ftp_proxy.
-
The content of the variable is protocol://server:port. The protocol
value is the one used to talk to the proxy server, itself, and is
commonly either http or socks5.
@@ -16550,6 +17541,8 @@ e.g.

    export no_proxy=localhost,127.0.0.0/8,my.host.name
    export NO_PROXY=$no_proxy

+Note that the ftp backend does not support ftp_proxy yet.
+
Rclone gives x509: failed to load system roots and no roots provided error

This means that rclone can't find the SSL root certificates. Likely you
@@ -16793,6 +17786,7 @@ Contributors

- Antoine GIRARD sapk@users.noreply.github.com
- Mateusz Piotrowski mpp302@gmail.com
- Animosity022 animosity22@users.noreply.github.com
+  earl.texter@gmail.com
- Peter Baumgartner pete@lincolnloop.com
- Craig Rachel craig@craigrachel.com
- Michael G.
Noll miguno@users.noreply.github.com @@ -16856,6 +17850,25 @@ Contributors - Peter Kaminski kaminski@istori.com - Henry Ptasinski henry@logout.com - Alexander kharkovalexander@gmail.com +- Garry McNulty garrmcnu@gmail.com +- Mathieu Carbou mathieu.carbou@gmail.com +- Mark Otway mark@otway.com +- William Cocker 37018962+WilliamCocker@users.noreply.github.com +- François Leurent 131.js@cloudyks.org +- Arkadius Stefanski arkste@gmail.com +- Jay dev@jaygoel.com +- andrea rota a@xelera.eu +- nicolov nicolov@users.noreply.github.com +- Dario Guzik dario@guzik.com.ar +- qip qip@users.noreply.github.com +- yair@unicorn yair@unicorn +- Matt Robinson brimstone@the.narro.ws +- kayrus kay.diam@gmail.com +- Rémy Léone remy.leone@gmail.com +- Wojciech Smigielski wojciech.hieronim.smigielski@gmail.com +- weetmuts oehrstroem@gmail.com +- Jonathan vanillajonathan@users.noreply.github.com +- James Carpenter orbsmiv@users.noreply.github.com diff --git a/docs/content/b2.md b/docs/content/b2.md index 1652750b8..11f23a4c8 100644 --- a/docs/content/b2.md +++ b/docs/content/b2.md @@ -393,12 +393,21 @@ Upload chunk size. Must fit in memory. When uploading large files, chunk the file into this size. Note that these chunks are buffered in memory and there might a maximum of "--transfers" chunks in progress at once. 5,000,000 Bytes is the -minimim size. +minimum size. 
- Config: chunk_size
- Env Var: RCLONE_B2_CHUNK_SIZE
- Type: SizeSuffix
- Default: 96M

+#### --b2-disable-checksum
+
+Disable checksums for large (> upload cutoff) files
+
+- Config: disable_checksum
+- Env Var: RCLONE_B2_DISABLE_CHECKSUM
+- Type: bool
+- Default: false
+
diff --git a/docs/content/changelog.md b/docs/content/changelog.md
index 882fa3b39..ec35e9d63 100644
--- a/docs/content/changelog.md
+++ b/docs/content/changelog.md
@@ -1,11 +1,140 @@
---
title: "Documentation"
description: "Rclone Changelog"
-date: "2018-11-24"
+date: "2019-02-09"
---

# Changelog

+## v1.46 - 2019-02-09
+
+* New backends
+    * Support Alibaba Cloud (Aliyun) OSS via the s3 backend (Nick Craig-Wood)
+* New commands
+    * serve dlna: serves a remote via DLNA for the local network (nicolov)
+* New Features
+    * copy, move: Restore deprecated `--no-traverse` flag (Nick Craig-Wood)
+        * This is useful for when transferring a small number of files into a large destination
+    * genautocomplete: Add remote path completion for bash completion (Christopher Peterson & Danil Semelenov)
+    * Buffer memory handling reworked to return memory to the OS better (Nick Craig-Wood)
+        * Buffer recycling library to replace sync.Pool
+        * Optionally use memory mapped memory for better memory shrinking
+        * Enable with `--use-mmap` if having memory problems - not default yet
+    * Parallelise reading of files specified by `--files-from` (Nick Craig-Wood)
+    * check: Add stats showing total files matched.
(Dario Guzik) + * Allow rename/delete open files under Windows (Nick Craig-Wood) + * lsjson: Use exactly the correct number of decimal places in the seconds (Nick Craig-Wood) + * Add cookie support with cmdline switch `--use-cookies` for all HTTP based remotes (qip) + * Warn if `--checksum` is set but there are no hashes available (Nick Craig-Wood) + * Rework rate limiting (pacer) to be more accurate and allow bursting (Nick Craig-Wood) + * Improve error reporting for too many/few arguments in commands (Nick Craig-Wood) + * listremotes: Remove `-l` short flag as it conflicts with the new global flag (weetmuts) + * Make http serving with auth generate INFO messages on auth fail (Nick Craig-Wood) +* Bug Fixes + * Fix layout of stats (Nick Craig-Wood) + * Fix `--progress` crash under Windows Jenkins (Nick Craig-Wood) + * Fix transfer of google/onedrive docs by calling Rcat in Copy when size is -1 (Cnly) + * copyurl: Fix checking of `--dry-run` (Denis Skovpen) +* Mount + * Check that mountpoint and local directory to mount don't overlap (Nick Craig-Wood) + * Fix mount size under 32 bit Windows (Nick Craig-Wood) +* VFS + * Implement renaming of directories for backends without DirMove (Nick Craig-Wood) + * now all backends except b2 support renaming directories + * Implement `--vfs-cache-max-size` to limit the total size of the cache (Nick Craig-Wood) + * Add `--dir-perms` and `--file-perms` flags to set default permissions (Nick Craig-Wood) + * Fix deadlock on concurrent operations on a directory (Nick Craig-Wood) + * Fix deadlock between RWFileHandle.close and File.Remove (Nick Craig-Wood) + * Fix renaming/deleting open files with cache mode "writes" under Windows (Nick Craig-Wood) + * Fix panic on rename with `--dry-run` set (Nick Craig-Wood) + * Fix vfs/refresh with recurse=true needing the `--fast-list` flag +* Local + * Add support for `-l`/`--links` (symbolic link translation) (yair@unicorn) + * this works by showing links as `link.rclonelink` - see local backend 
docs for more info + * this errors if used with `-L`/`--copy-links` + * Fix renaming/deleting open files on Windows (Nick Craig-Wood) +* Crypt + * Check for maximum length before decrypting filename to fix panic (Garry McNulty) +* Azure Blob + * Allow building azureblob backend on *BSD (themylogin) + * Use the rclone HTTP client to support `--dump headers`, `--tpslimit` etc (Nick Craig-Wood) + * Use the s3 pacer for 0 delay in non error conditions (Nick Craig-Wood) + * Ignore directory markers (Nick Craig-Wood) + * Stop Mkdir attempting to create existing containers (Nick Craig-Wood) +* B2 + * cleanup: will remove unfinished large files >24hrs old (Garry McNulty) + * For a bucket limited application key check the bucket name (Nick Craig-Wood) + * before this, rclone would use the authorised bucket regardless of what you put on the command line + * Added `--b2-disable-checksum` flag (Wojciech Smigielski) + * this enables large files to be uploaded without a SHA-1 hash for speed reasons +* Drive + * Set default pacer to 100ms for 10 tps (Nick Craig-Wood) + * This fits the Google defaults much better and reduces the 403 errors massively + * Add `--drive-pacer-min-sleep` and `--drive-pacer-burst` to control the pacer + * Improve ChangeNotify support for items with multiple parents (Fabian Möller) + * Fix ListR for items with multiple parents - this fixes oddities with `vfs/refresh` (Fabian Möller) + * Fix using `--drive-impersonate` and appfolders (Nick Craig-Wood) + * Fix google docs in rclone mount for some (not all) applications (Nick Craig-Wood) +* Dropbox + * Retry-After support for Dropbox backend (Mathieu Carbou) +* FTP + * Wait for 60 seconds for a connection to Close then declare it dead (Nick Craig-Wood) + * helps with indefinite hangs on some FTP servers +* Google Cloud Storage + * Update google cloud storage endpoints (weetmuts) +* HTTP + * Add an example with username and password which is supported but wasn't documented (Nick Craig-Wood) + * Fix backend 
with `--files-from` and non-existent files (Nick Craig-Wood) +* Hubic + * Make error message more informative if authentication fails (Nick Craig-Wood) +* Jottacloud + * Resume and deduplication support (Oliver Heyme) + * Use token auth for all API requests. Don't store password anymore (Sebastian Bünger) + * Add support for 2-factor authentication (Sebastian Bünger) +* Mega + * Implement v2 account login which fixes logins for newer Mega accounts (Nick Craig-Wood) + * Return error if an unknown length file is attempted to be uploaded (Nick Craig-Wood) + * Add new error codes for better error reporting (Nick Craig-Wood) +* Onedrive + * Fix broken support for "shared with me" folders (Alex Chen) + * Fix root ID not normalised (Cnly) + * Return err instead of panic on unknown-sized uploads (Cnly) +* Qingstor + * Fix go routine leak on multipart upload errors (Nick Craig-Wood) + * Add upload chunk size/concurrency/cutoff control (Nick Craig-Wood) + * Default `--qingstor-upload-concurrency` to 1 to work around bug (Nick Craig-Wood) +* S3 + * Implement `--s3-upload-cutoff` for single part uploads below this (Nick Craig-Wood) + * Change `--s3-upload-concurrency` default to 4 to increase performance (Nick Craig-Wood) + * Add `--s3-bucket-acl` to control bucket ACL (Nick Craig-Wood) + * Auto detect region for buckets on operation failure (Nick Craig-Wood) + * Add GLACIER storage class (William Cocker) + * Add Scaleway to s3 documentation (Rémy Léone) + * Add AWS endpoint eu-north-1 (weetmuts) +* SFTP + * Add support for PEM encrypted private keys (Fabian Möller) + * Add option to force the usage of an ssh-agent (Fabian Möller) + * Perform environment variable expansion on key-file (Fabian Möller) + * Fix rmdir on Windows based servers (eg CrushFTP) (Nick Craig-Wood) + * Fix rmdir deleting directory contents on some SFTP servers (Nick Craig-Wood) + * Fix error on dangling symlinks (Nick Craig-Wood) +* Swift + * Add `--swift-no-chunk` to disable segmented uploads in 
rcat/mount (Nick Craig-Wood) + * Introduce application credential auth support (kayrus) + * Fix memory usage by slimming Object (Nick Craig-Wood) + * Fix extra requests on upload (Nick Craig-Wood) + * Fix reauth on big files (Nick Craig-Wood) +* Union + * Fix poll-interval not working (Nick Craig-Wood) +* WebDAV + * Support About which means rclone mount will show the correct disk size (Nick Craig-Wood) + * Support MD5 and SHA1 hashes with Owncloud and Nextcloud (Nick Craig-Wood) + * Fail soft on time parsing errors (Nick Craig-Wood) + * Fix infinite loop on failed directory creation (Nick Craig-Wood) + * Fix identification of directories for Bitrix Site Manager (Nick Craig-Wood) + * Fix upload of 0 length files on some servers (Nick Craig-Wood) + * Fix if MKCOL fails with 423 Locked assume the directory exists (Nick Craig-Wood) + ## v1.45 - 2018-11-24 * New backends diff --git a/docs/content/commands/rclone.md b/docs/content/commands/rclone.md index a79ba4147..d1e6e4759 100644 --- a/docs/content/commands/rclone.md +++ b/docs/content/commands/rclone.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone" slug: rclone url: /commands/rclone/ @@ -26,283 +26,301 @@ rclone [flags] ### Options ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. 
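The B2 chunk-size documentation earlier in this patch notes that chunks are buffered in memory and that up to "--transfers" chunks may be in progress at once, which implies a simple worst-case memory bound of transfers × chunk size. A small illustrative sketch of that arithmetic (not rclone code), using the documented defaults:

```python
def worst_case_chunk_memory(transfers: int, chunk_size: int) -> int:
    """Upper bound on memory used for B2 upload chunk buffers:
    up to --transfers chunks of --b2-chunk-size each may be in flight."""
    return transfers * chunk_size

MiB = 1024 * 1024
# rclone defaults: --transfers 4, --b2-chunk-size 96M -> 384 MiB worst case
print(worst_case_chunk_memory(4, 96 * MiB) // MiB)  # → 384
```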
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. 
- --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. 
(default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. 
- --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. 
Uses more memory but fewer transactions. - --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - -h, --help help for rclone - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. 
- --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. - --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. 
(default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. - --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. 
- --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. - --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. 
- --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. 
(default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - -V, --version Print the version number - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). 
(default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. 
(default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. + --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. 
+ --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. + --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. 
+ --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. + --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. 
+ -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. + --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + -h, --help help for rclone + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. 
+ --ignore-errors Delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping; use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. + --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name: + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. 
(default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. + --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. 
(default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. 
(default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access; if false use virtual hosted style. (default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. 
This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). 
+ --swift-no-chunk Don't chunk files during streaming upload. + --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + -V, --version Print the version number + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO @@ -355,4 +373,4 @@ rclone [flags] * [rclone tree](/commands/rclone_tree/) - List the contents of the remote in a tree like fashion. * [rclone version](/commands/rclone_version/) - Show the version number. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_about.md b/docs/content/commands/rclone_about.md index 48211895d..a4e105aa1 100644 --- a/docs/content/commands/rclone_about.md +++ b/docs/content/commands/rclone_about.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone about" slug: rclone_about url: /commands/rclone_about/ @@ -69,285 +69,303 @@ rclone about remote: [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. 
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. 
- --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. 
(default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. 
- --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. 
Uses more memory but fewer transactions. - --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. 
-      --mega-user string  User name
-      --memprofile string  Write memory profile to file
-      --min-age duration  Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-      --min-size int  Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-      --modify-window duration  Max time diff to be considered the same (default 1ns)
-      --no-check-certificate  Do not verify the server SSL certificate. Insecure.
-      --no-gzip-encoding  Don't set Accept-Encoding: gzip.
-      --no-traverse  Obsolete - does nothing.
-      --no-update-modtime  Don't update destination mod-time if files identical.
-  -x, --one-file-system  Don't cross filesystem boundaries (unix/macOS only).
-      --onedrive-chunk-size SizeSuffix  Chunk size to upload files with - must be multiple of 320k. (default 10M)
-      --onedrive-client-id string  Microsoft App Client Id
-      --onedrive-client-secret string  Microsoft App Client Secret
-      --onedrive-drive-id string  The ID of the drive to use
-      --onedrive-drive-type string  The type of the drive ( personal | business | documentLibrary )
-      --onedrive-expose-onenote-files  Set to make OneNote files show up in directory listings.
-      --opendrive-password string  Password.
-      --opendrive-username string  Username
-      --pcloud-client-id string  Pcloud App Client Id
-      --pcloud-client-secret string  Pcloud App Client Secret
-  -P, --progress  Show progress during transfer.
-      --qingstor-access-key-id string  QingStor Access Key ID
-      --qingstor-connection-retries int  Number of connection retries. (default 3)
-      --qingstor-endpoint string  Enter a endpoint URL to connection QingStor API.
-      --qingstor-env-auth  Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
-      --qingstor-secret-access-key string  QingStor Secret Access Key (password)
-      --qingstor-zone string  Zone to connect to.
-  -q, --quiet  Print as little stuff as possible
-      --rc  Enable the remote control server.
-      --rc-addr string  IPaddress:Port or :Port to bind server to. (default "localhost:5572")
-      --rc-cert string  SSL PEM key (concatenation of certificate and CA certificate)
-      --rc-client-ca string  Client certificate authority to verify clients with
-      --rc-files string  Path to local files to serve on the HTTP server.
-      --rc-htpasswd string  htpasswd file - if not provided no authentication is done
-      --rc-key string  SSL PEM Private key
-      --rc-max-header-bytes int  Maximum size of request header (default 4096)
-      --rc-no-auth  Don't require auth for certain methods.
-      --rc-pass string  Password for authentication.
-      --rc-realm string  realm for authentication (default "rclone")
-      --rc-serve  Enable the serving of remote objects.
-      --rc-server-read-timeout duration  Timeout for server reading data (default 1h0m0s)
-      --rc-server-write-timeout duration  Timeout for server writing data (default 1h0m0s)
-      --rc-user string  User name for authentication.
-      --retries int  Retry operations this many times if they fail (default 3)
-      --retries-sleep duration  Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
-      --s3-access-key-id string  AWS Access Key ID.
-      --s3-acl string  Canned ACL used when creating buckets and storing or copying objects.
-      --s3-chunk-size SizeSuffix  Chunk size to use for uploading. (default 5M)
-      --s3-disable-checksum  Don't store MD5 checksum with object metadata
-      --s3-endpoint string  Endpoint for S3 API.
-      --s3-env-auth  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
-      --s3-force-path-style  If true use path style access if false use virtual hosted style. (default true)
-      --s3-location-constraint string  Location constraint - must be set to match the Region.
-      --s3-provider string  Choose your S3 provider.
-      --s3-region string  Region to connect to.
-      --s3-secret-access-key string  AWS Secret Access Key (password)
-      --s3-server-side-encryption string  The server-side encryption algorithm used when storing this object in S3.
-      --s3-session-token string  An AWS session token
-      --s3-sse-kms-key-id string  If using KMS ID you must provide the ARN of Key.
-      --s3-storage-class string  The storage class to use when storing new objects in S3.
-      --s3-upload-concurrency int  Concurrency for multipart uploads. (default 2)
-      --s3-v2-auth  If true use v2 authentication.
-      --sftp-ask-password  Allow asking for SFTP password when needed.
-      --sftp-disable-hashcheck  Disable the execution of SSH commands to determine if remote file hashing is available.
-      --sftp-host string  SSH host to connect to
-      --sftp-key-file string  Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
-      --sftp-pass string  SSH password, leave blank to use ssh-agent.
-      --sftp-path-override string  Override path used by SSH connection.
-      --sftp-port string  SSH port, leave blank to use default (22)
-      --sftp-set-modtime  Set the modified time on the remote if set. (default true)
-      --sftp-use-insecure-cipher  Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
-      --sftp-user string  SSH username, leave blank for current username, ncw
-      --size-only  Skip based on size only, not mod-time or checksum
-      --skip-links  Don't warn about skipped symlinks.
-      --stats duration  Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
-      --stats-file-name-length int  Max file name length in stats. 0 for no limit (default 40)
-      --stats-log-level string  Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
-      --stats-one-line  Make the stats fit on one line.
-      --stats-unit string  Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
-      --streaming-upload-cutoff int  Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
-      --suffix string  Suffix for use with --backup-dir.
-      --swift-auth string  Authentication URL for server (OS_AUTH_URL).
-      --swift-auth-token string  Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
-      --swift-auth-version int  AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
-      --swift-chunk-size SizeSuffix  Above this size files will be chunked into a _segments container. (default 5G)
-      --swift-domain string  User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
-      --swift-endpoint-type string  Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-      --swift-env-auth  Get swift credentials from environment variables in standard OpenStack form.
-      --swift-key string  API key or password (OS_PASSWORD).
-      --swift-region string  Region name - optional (OS_REGION_NAME)
-      --swift-storage-policy string  The storage policy to use when creating a new container
-      --swift-storage-url string  Storage URL - optional (OS_STORAGE_URL)
-      --swift-tenant string  Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
-      --swift-tenant-domain string  Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
-      --swift-tenant-id string  Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
-      --swift-user string  User name to log in (OS_USERNAME).
-      --swift-user-id string  User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
-      --syslog  Use Syslog for logging
-      --syslog-facility string  Facility for syslog, eg KERN,USER,... (default "DAEMON")
-      --timeout duration  IO idle timeout (default 5m0s)
-      --tpslimit float  Limit HTTP transactions per second to this.
-      --tpslimit-burst int  Max burst of transactions for --tpslimit. (default 1)
-      --track-renames  When synchronizing, track file renames and do a server side move if possible
-      --transfers int  Number of file transfers to run in parallel. (default 4)
-      --union-remotes string  List of space separated remotes.
-  -u, --update  Skip files that are newer on the destination.
-      --use-server-modtime  Use server modified time instead of object metadata
-      --user-agent string  Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45")
-  -v, --verbose count  Print lots more stuff (repeat for more)
-      --webdav-bearer-token string  Bearer token instead of user/pass (eg a Macaroon)
-      --webdav-pass string  Password.
-      --webdav-url string  URL of http host to connect to
-      --webdav-user string  User name
-      --webdav-vendor string  Name of the Webdav site/service/software you are using
-      --yandex-client-id string  Yandex Client Id
-      --yandex-client-secret string  Yandex Client Secret
-      --yandex-unlink  Remove existing public link to file/folder with link command rather than creating.
+      --acd-auth-url string  Auth server URL.
+      --acd-client-id string  Amazon Application Client ID.
+      --acd-client-secret string  Amazon Application Client Secret.
+      --acd-templink-threshold SizeSuffix  Files >= this size will be downloaded via their tempLink. (default 9G)
+      --acd-token-url string  Token server url.
+      --acd-upload-wait-per-gb Duration  Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+      --alias-remote string  Remote or path to alias.
+      --ask-password  Allow prompt for password for encrypted configuration. (default true)
+      --auto-confirm  If enabled, do not request console confirmation.
+      --azureblob-access-tier string  Access tier of blob: hot, cool or archive.
+      --azureblob-account string  Storage Account Name (leave blank to use connection string or SAS URL)
+      --azureblob-chunk-size SizeSuffix  Upload chunk size (<= 100MB). (default 4M)
+      --azureblob-endpoint string  Endpoint for the service
+      --azureblob-key string  Storage Account Key (leave blank to use connection string or SAS URL)
+      --azureblob-list-chunk int  Size of blob list. (default 5000)
+      --azureblob-sas-url string  SAS URL for container level access only
+      --azureblob-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+      --b2-account string  Account ID or Application Key ID
+      --b2-chunk-size SizeSuffix  Upload chunk size. Must fit in memory. (default 96M)
+      --b2-disable-checksum  Disable checksums for large (> upload cutoff) files
+      --b2-endpoint string  Endpoint for the service.
+      --b2-hard-delete  Permanently delete files on remote removal, otherwise hide files.
+      --b2-key string  Application Key
+      --b2-test-mode string  A flag string for X-Bz-Test-Mode header for debugging.
+      --b2-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload. (default 200M)
+      --b2-versions  Include old versions in directory listings.
+      --backup-dir string  Make backups into hierarchy based in DIR.
+      --bind string  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+      --box-client-id string  Box App Client Id.
+      --box-client-secret string  Box App Client Secret
+      --box-commit-retries int  Max number of times to try committing a multipart file. (default 100)
+      --box-upload-cutoff SizeSuffix  Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+      --buffer-size SizeSuffix  In memory buffer size when reading files for each --transfer. (default 16M)
+      --bwlimit BwTimetable  Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+      --cache-chunk-clean-interval Duration  How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+      --cache-chunk-no-memory  Disable the in-memory cache for storing chunks during streaming.
+      --cache-chunk-path string  Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
+      --cache-chunk-size SizeSuffix  The size of a chunk (partial file data). (default 5M)
+      --cache-chunk-total-size SizeSuffix  The total size that the chunks can take up on the local disk. (default 10G)
+      --cache-db-path string  Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
+      --cache-db-purge  Clear all the cached data for this remote on start.
+      --cache-db-wait-time Duration  How long to wait for the DB to be available - 0 is unlimited (default 1s)
+      --cache-dir string  Directory rclone will use for caching. (default "$HOME/.cache/rclone")
+      --cache-info-age Duration  How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+      --cache-plex-insecure string  Skip all certificate verifications when connecting to the Plex server
+      --cache-plex-password string  The password of the Plex user
+      --cache-plex-url string  The URL of the Plex server
+      --cache-plex-username string  The username of the Plex user
+      --cache-read-retries int  How many times to retry a read from a cache storage. (default 10)
+      --cache-remote string  Remote to cache.
+      --cache-rps int  Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+      --cache-tmp-upload-path string  Directory to keep temporary files until they are uploaded.
+      --cache-tmp-wait-time Duration  How long should files be stored in local cache before being uploaded (default 15s)
+      --cache-workers int  How many workers should run in parallel to download chunks. (default 4)
+      --cache-writes  Cache file data on writes through the FS
+      --checkers int  Number of checkers to run in parallel. (default 8)
+  -c, --checksum  Skip based on checksum (if available) & size, not mod-time & size
+      --config string  Config file. (default "/home/ncw/.rclone.conf")
+      --contimeout duration  Connect timeout (default 1m0s)
+  -L, --copy-links  Follow symlinks and copy the pointed to item.
+      --cpuprofile string  Write cpu profile to file
+      --crypt-directory-name-encryption  Option to either encrypt directory names or leave them intact. (default true)
+      --crypt-filename-encryption string  How to encrypt the filenames. (default "standard")
+      --crypt-password string  Password or pass phrase for encryption.
+      --crypt-password2 string  Password or pass phrase for salt. Optional but recommended.
+      --crypt-remote string  Remote to encrypt/decrypt.
+      --crypt-show-mapping  For all files listed show how the names encrypt.
+      --delete-after  When synchronizing, delete files on destination after transferring (default)
+      --delete-before  When synchronizing, delete files on destination before transferring
+      --delete-during  When synchronizing, delete files during transfer
+      --delete-excluded  Delete files on dest excluded from sync
+      --disable string  Disable a comma separated list of features. Use help to see a list.
+      --drive-acknowledge-abuse  Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+      --drive-allow-import-name-change  Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+      --drive-alternate-export  Use alternate export URLs for google documents export.
+      --drive-auth-owner-only  Only consider files owned by the authenticated user.
+      --drive-chunk-size SizeSuffix  Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+      --drive-client-id string  Google Application Client Id
+      --drive-client-secret string  Google Application Client Secret
+      --drive-export-formats string  Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+      --drive-formats string  Deprecated: see export_formats
+      --drive-impersonate string  Impersonate this user when using a service account.
+      --drive-import-formats string  Comma separated list of preferred formats for uploading Google docs.
+      --drive-keep-revision-forever  Keep new head revision of each file forever.
+      --drive-list-chunk int  Size of listing chunk 100-1000. 0 to disable. (default 1000)
+      --drive-pacer-burst int  Number of API calls to allow without sleeping. (default 100)
+      --drive-pacer-min-sleep Duration  Minimum time to sleep between API calls. (default 100ms)
+      --drive-root-folder-id string  ID of the root folder
+      --drive-scope string  Scope that rclone should use when requesting access from drive.
+      --drive-service-account-credentials string  Service Account Credentials JSON blob
+      --drive-service-account-file string  Service Account Credentials JSON file path
+      --drive-shared-with-me  Only show files that are shared with me.
+      --drive-skip-gdocs  Skip google documents in all listings.
+      --drive-team-drive string  ID of the Team Drive
+      --drive-trashed-only  Only show files that are in the trash.
+      --drive-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (default 8M)
+      --drive-use-created-date  Use file created date instead of modified date.
+      --drive-use-trash  Send files to the trash instead of deleting permanently. (default true)
+      --drive-v2-download-min-size SizeSuffix  If Objects are greater, use drive v2 API to download. (default off)
+      --dropbox-chunk-size SizeSuffix  Upload chunk size. (< 150M). (default 48M)
+      --dropbox-client-id string  Dropbox App Client Id
+      --dropbox-client-secret string  Dropbox App Client Secret
+      --dropbox-impersonate string  Impersonate this user when using a business account.
+  -n, --dry-run  Do a trial run with no permanent changes
+      --dump DumpFlags  List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+      --dump-bodies  Dump HTTP headers and bodies - may contain sensitive info
+      --dump-headers  Dump HTTP headers - may contain sensitive info
+      --exclude stringArray  Exclude files matching pattern
+      --exclude-from stringArray  Read exclude patterns from file
+      --exclude-if-present string  Exclude directories if filename is present
+      --fast-list  Use recursive list if available. Uses more memory but fewer transactions.
+      --files-from stringArray  Read list of source-file names from file
+  -f, --filter stringArray  Add a file-filtering rule
+      --filter-from stringArray  Read filtering patterns from a file
+      --ftp-host string  FTP host to connect to
+      --ftp-pass string  FTP password
+      --ftp-port string  FTP port, leave blank to use default (21)
+      --ftp-user string  FTP username, leave blank for current username, $USER
+      --gcs-bucket-acl string  Access Control List for new buckets.
+      --gcs-client-id string  Google Application Client Id
+      --gcs-client-secret string  Google Application Client Secret
+      --gcs-location string  Location for the newly created buckets.
+      --gcs-object-acl string  Access Control List for new objects.
+      --gcs-project-number string  Project number.
+      --gcs-service-account-file string  Service Account Credentials JSON file path
+      --gcs-storage-class string  The storage class to use when storing objects in Google Cloud Storage.
+      --http-url string  URL of http host to connect to
+      --hubic-chunk-size SizeSuffix  Above this size files will be chunked into a _segments container. (default 5G)
+      --hubic-client-id string  Hubic Client Id
+      --hubic-client-secret string  Hubic Client Secret
+      --hubic-no-chunk  Don't chunk files during streaming upload.
+      --ignore-case  Ignore case in filters (case insensitive)
+      --ignore-checksum  Skip post copy check of checksums.
+      --ignore-errors  Delete even if there are I/O errors
+      --ignore-existing  Skip all files that exist on destination
+      --ignore-size  Ignore size when skipping; use mod-time or checksum.
+  -I, --ignore-times  Don't skip files that match size and time - transfer all files
+      --immutable  Do not modify files. Fail if existing files have been modified.
+      --include stringArray  Include files matching pattern
+      --include-from stringArray  Read include patterns from file
+      --jottacloud-hard-delete  Delete files permanently rather than putting them into the trash.
+      --jottacloud-md5-memory-limit SizeSuffix  Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+      --jottacloud-mountpoint string  The mountpoint to use.
+      --jottacloud-unlink  Remove existing public link to file/folder with link command rather than creating.
+      --jottacloud-upload-resume-limit SizeSuffix  Files bigger than this can be resumed if the upload fails. (default 10M)
+      --jottacloud-user string  User Name
+  -l, --links  Translate symlinks to/from regular files with a '.rclonelink' extension
+      --local-no-check-updated  Don't check to see if the files change during upload
+      --local-no-unicode-normalization  Don't apply unicode normalization to paths and filenames (Deprecated)
+      --local-nounc string  Disable UNC (long path names) conversion on Windows
+      --log-file string  Log everything to this file
+      --log-format string  Comma separated list of log format options (default "date,time")
+      --log-level string  Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+      --low-level-retries int  Number of low level retries to do. (default 10)
+      --max-age Duration  Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --max-backlog int  Maximum number of objects in sync or check backlog. (default 10000)
+      --max-delete int  When synchronizing, limit the number of deletes (default -1)
+      --max-depth int  If set limits the recursion depth to this. (default -1)
+      --max-size SizeSuffix  Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+      --max-transfer SizeSuffix  Maximum size of data to transfer. (default off)
+      --mega-debug  Output more debug from Mega.
+      --mega-hard-delete  Delete files permanently rather than putting them into the trash.
+      --mega-pass string  Password.
+      --mega-user string  User name
+      --memprofile string  Write memory profile to file
+      --min-age Duration  Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --min-size SizeSuffix  Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+      --modify-window duration  Max time diff to be considered the same (default 1ns)
+      --no-check-certificate  Do not verify the server SSL certificate. Insecure.
+      --no-gzip-encoding  Don't set Accept-Encoding: gzip.
+      --no-traverse  Don't traverse destination file system on copy.
+      --no-update-modtime  Don't update destination mod-time if files identical.
+  -x, --one-file-system  Don't cross filesystem boundaries (unix/macOS only).
+      --onedrive-chunk-size SizeSuffix  Chunk size to upload files with - must be multiple of 320k. (default 10M)
+      --onedrive-client-id string  Microsoft App Client Id
+      --onedrive-client-secret string  Microsoft App Client Secret
+      --onedrive-drive-id string  The ID of the drive to use
+      --onedrive-drive-type string  The type of the drive ( personal | business | documentLibrary )
+      --onedrive-expose-onenote-files  Set to make OneNote files show up in directory listings.
+      --opendrive-password string  Password.
+      --opendrive-username string  Username
+      --pcloud-client-id string  Pcloud App Client Id
+      --pcloud-client-secret string  Pcloud App Client Secret
+  -P, --progress  Show progress during transfer.
+      --qingstor-access-key-id string  QingStor Access Key ID
+      --qingstor-chunk-size SizeSuffix  Chunk size to use for uploading. (default 4M)
+      --qingstor-connection-retries int  Number of connection retries. (default 3)
+      --qingstor-endpoint string  Enter an endpoint URL to connect to the QingStor API.
+      --qingstor-env-auth  Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+      --qingstor-secret-access-key string  QingStor Secret Access Key (password)
+      --qingstor-upload-concurrency int  Concurrency for multipart uploads. (default 1)
+      --qingstor-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (default 200M)
+      --qingstor-zone string  Zone to connect to.
+  -q, --quiet  Print as little stuff as possible
+      --rc  Enable the remote control server.
+      --rc-addr string  IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+      --rc-cert string  SSL PEM key (concatenation of certificate and CA certificate)
+      --rc-client-ca string  Client certificate authority to verify clients with
+      --rc-files string  Path to local files to serve on the HTTP server.
+      --rc-htpasswd string  htpasswd file - if not provided no authentication is done
+      --rc-key string  SSL PEM Private key
+      --rc-max-header-bytes int  Maximum size of request header (default 4096)
+      --rc-no-auth  Don't require auth for certain methods.
+      --rc-pass string  Password for authentication.
+      --rc-realm string  realm for authentication (default "rclone")
+      --rc-serve  Enable the serving of remote objects.
+      --rc-server-read-timeout duration  Timeout for server reading data (default 1h0m0s)
+      --rc-server-write-timeout duration  Timeout for server writing data (default 1h0m0s)
+      --rc-user string  User name for authentication.
+      --retries int  Retry operations this many times if they fail (default 3)
+      --retries-sleep duration  Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+      --s3-access-key-id string  AWS Access Key ID.
+      --s3-acl string  Canned ACL used when creating buckets and storing or copying objects.
+      --s3-bucket-acl string  Canned ACL used when creating buckets.
+      --s3-chunk-size SizeSuffix  Chunk size to use for uploading. (default 5M)
+      --s3-disable-checksum  Don't store MD5 checksum with object metadata
+      --s3-endpoint string  Endpoint for S3 API.
+      --s3-env-auth  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+      --s3-force-path-style  If true use path style access if false use virtual hosted style. (default true)
+      --s3-location-constraint string  Location constraint - must be set to match the Region.
+      --s3-provider string  Choose your S3 provider.
+      --s3-region string  Region to connect to.
+      --s3-secret-access-key string  AWS Secret Access Key (password)
+      --s3-server-side-encryption string  The server-side encryption algorithm used when storing this object in S3.
+      --s3-session-token string  An AWS session token
+      --s3-sse-kms-key-id string  If using KMS ID you must provide the ARN of Key.
+      --s3-storage-class string  The storage class to use when storing new objects in S3.
+      --s3-upload-concurrency int  Concurrency for multipart uploads. (default 4)
+      --s3-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (default 200M)
+      --s3-v2-auth  If true use v2 authentication.
+      --sftp-ask-password  Allow asking for SFTP password when needed.
+      --sftp-disable-hashcheck  Disable the execution of SSH commands to determine if remote file hashing is available.
+      --sftp-host string  SSH host to connect to
+      --sftp-key-file string  Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
+      --sftp-key-file-pass string  The passphrase to decrypt the PEM-encoded private key file.
+      --sftp-key-use-agent  When set forces the usage of the ssh-agent.
+      --sftp-pass string  SSH password, leave blank to use ssh-agent.
+      --sftp-path-override string  Override path used by SSH connection.
+      --sftp-port string  SSH port, leave blank to use default (22)
+      --sftp-set-modtime  Set the modified time on the remote if set. (default true)
+      --sftp-use-insecure-cipher  Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+      --sftp-user string  SSH username, leave blank for current username, ncw
+      --size-only  Skip based on size only, not mod-time or checksum
+      --skip-links  Don't warn about skipped symlinks.
+      --stats duration  Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+      --stats-file-name-length int  Max file name length in stats. 0 for no limit (default 45)
+      --stats-log-level string  Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+      --stats-one-line  Make the stats fit on one line.
+      --stats-unit string  Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+      --streaming-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+      --suffix string  Suffix for use with --backup-dir.
+      --swift-application-credential-id string  Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
+      --swift-application-credential-name string  Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
+      --swift-application-credential-secret string  Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
+      --swift-auth string  Authentication URL for server (OS_AUTH_URL).
+      --swift-auth-token string  Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+      --swift-auth-version int  AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+      --swift-chunk-size SizeSuffix  Above this size files will be chunked into a _segments container. (default 5G)
+      --swift-domain string  User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+      --swift-endpoint-type string  Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+      --swift-env-auth  Get swift credentials from environment variables in standard OpenStack form.
+      --swift-key string  API key or password (OS_PASSWORD).
+      --swift-no-chunk  Don't chunk files during streaming upload.
+      --swift-region string  Region name - optional (OS_REGION_NAME)
+      --swift-storage-policy string  The storage policy to use when creating a new container
+      --swift-storage-url string  Storage URL - optional (OS_STORAGE_URL)
+      --swift-tenant string  Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+      --swift-tenant-domain string  Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+      --swift-tenant-id string  Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+      --swift-user string  User name to log in (OS_USERNAME).
+      --swift-user-id string  User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+      --syslog  Use Syslog for logging
+      --syslog-facility string  Facility for syslog, eg KERN,USER,... (default "DAEMON")
+      --timeout duration  IO idle timeout (default 5m0s)
+      --tpslimit float  Limit HTTP transactions per second to this.
+      --tpslimit-burst int  Max burst of transactions for --tpslimit. (default 1)
+      --track-renames  When synchronizing, track file renames and do a server side move if possible
+      --transfers int  Number of file transfers to run in parallel. (default 4)
+      --union-remotes string  List of space separated remotes.
+  -u, --update  Skip files that are newer on the destination.
+      --use-cookies  Enable session cookiejar.
+      --use-mmap  Use mmap allocator (see docs).
+      --use-server-modtime  Use server modified time instead of object metadata
+      --user-agent string  Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
+  -v, --verbose count  Print lots more stuff (repeat for more)
+      --webdav-bearer-token string  Bearer token instead of user/pass (eg a Macaroon)
+      --webdav-pass string  Password.
+      --webdav-url string  URL of http host to connect to
+      --webdav-user string  User name
+      --webdav-vendor string  Name of the Webdav site/service/software you are using
+      --yandex-client-id string  Yandex Client Id
+      --yandex-client-secret string  Yandex Client Secret
+      --yandex-unlink  Remove existing public link to file/folder with link command rather than creating.
 ```
 
 ### SEE ALSO
 
 * [rclone](/commands/rclone/)	 - Show help for rclone commands, flags and backends.
 
-###### Auto generated by spf13/cobra on 24-Nov-2018
+###### Auto generated by spf13/cobra on 9-Feb-2019
diff --git a/docs/content/commands/rclone_authorize.md b/docs/content/commands/rclone_authorize.md
index d5c7a9fb6..f617339db 100644
--- a/docs/content/commands/rclone_authorize.md
+++ b/docs/content/commands/rclone_authorize.md
@@ -1,5 +1,5 @@
 ---
-date: 2018-11-24T13:43:29Z
+date: 2019-02-09T10:42:18Z
 title: "rclone authorize"
 slug: rclone_authorize
 url: /commands/rclone_authorize/
@@ -28,285 +28,303 @@ rclone authorize [flags]
 ### Options inherited from parent commands
 
 ```
-      --acd-auth-url string  Auth server URL.
-      --acd-client-id string  Amazon Application Client ID.
-      --acd-client-secret string  Amazon Application Client Secret.
-      --acd-templink-threshold SizeSuffix  Files >= this size will be downloaded via their tempLink. (default 9G)
-      --acd-token-url string  Token server url.
-      --acd-upload-wait-per-gb Duration  Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
-      --alias-remote string  Remote or path to alias.
-      --ask-password  Allow prompt for password for encrypted configuration. (default true)
-      --auto-confirm  If enabled, do not request console confirmation.
-      --azureblob-access-tier string  Access tier of blob: hot, cool or archive.
-      --azureblob-account string  Storage Account Name (leave blank to use connection string or SAS URL)
-      --azureblob-chunk-size SizeSuffix  Upload chunk size (<= 100MB). (default 4M)
-      --azureblob-endpoint string  Endpoint for the service
-      --azureblob-key string  Storage Account Key (leave blank to use connection string or SAS URL)
-      --azureblob-list-chunk int  Size of blob list. (default 5000)
-      --azureblob-sas-url string  SAS URL for container level access only
-      --azureblob-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (<= 256MB). (default 256M)
-      --b2-account string  Account ID or Application Key ID
-      --b2-chunk-size SizeSuffix  Upload chunk size. Must fit in memory. (default 96M)
-      --b2-endpoint string  Endpoint for the service.
-      --b2-hard-delete  Permanently delete files on remote removal, otherwise hide files.
-      --b2-key string  Application Key
-      --b2-test-mode string  A flag string for X-Bz-Test-Mode header for debugging.
-      --b2-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload. (default 200M)
-      --b2-versions  Include old versions in directory listings.
-      --backup-dir string  Make backups into hierarchy based in DIR.
-      --bind string  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
-      --box-client-id string  Box App Client Id.
-      --box-client-secret string  Box App Client Secret
-      --box-commit-retries int  Max number of times to try committing a multipart file. (default 100)
-      --box-upload-cutoff SizeSuffix  Cutoff for switching to multipart upload (>= 50MB). (default 50M)
-      --buffer-size int  In memory buffer size when reading files for each --transfer. (default 16M)
-      --bwlimit BwTimetable  Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
-      --cache-chunk-clean-interval Duration  How often should the cache perform cleanups of the chunk storage. (default 1m0s)
-      --cache-chunk-no-memory  Disable the in-memory cache for storing chunks during streaming.
-      --cache-chunk-path string  Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
-      --cache-chunk-size SizeSuffix  The size of a chunk (partial file data). (default 5M)
-      --cache-chunk-total-size SizeSuffix  The total size that the chunks can take up on the local disk. (default 10G)
-      --cache-db-path string  Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
-      --cache-db-purge  Clear all the cached data for this remote on start.
-      --cache-db-wait-time Duration  How long to wait for the DB to be available - 0 is unlimited (default 1s)
-      --cache-dir string  Directory rclone will use for caching. (default "$HOME/.cache/rclone")
-      --cache-info-age Duration  How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
-      --cache-plex-insecure string  Skip all certificate verifications when connecting to the Plex server
-      --cache-plex-password string  The password of the Plex user
-      --cache-plex-url string  The URL of the Plex server
-      --cache-plex-username string  The username of the Plex user
-      --cache-read-retries int  How many times to retry a read from a cache storage. (default 10)
-      --cache-remote string  Remote to cache.
-      --cache-rps int  Limits the number of requests per second to the source FS (-1 to disable) (default -1)
-      --cache-tmp-upload-path string  Directory to keep temporary files until they are uploaded.
-      --cache-tmp-wait-time Duration  How long should files be stored in local cache before being uploaded (default 15s)
-      --cache-workers int  How many workers should run in parallel to download chunks. (default 4)
-      --cache-writes  Cache file data on writes through the FS
-      --checkers int  Number of checkers to run in parallel. (default 8)
-  -c, --checksum  Skip based on checksum & size, not mod-time & size
-      --config string  Config file. (default "/home/ncw/.rclone.conf")
-      --contimeout duration  Connect timeout (default 1m0s)
-  -L, --copy-links  Follow symlinks and copy the pointed to item.
-      --cpuprofile string  Write cpu profile to file
-      --crypt-directory-name-encryption  Option to either encrypt directory names or leave them intact. (default true)
-      --crypt-filename-encryption string  How to encrypt the filenames. (default "standard")
-      --crypt-password string  Password or pass phrase for encryption.
-      --crypt-password2 string  Password or pass phrase for salt. Optional but recommended.
-      --crypt-remote string  Remote to encrypt/decrypt.
-      --crypt-show-mapping  For all files listed show how the names encrypt.
-      --delete-after  When synchronizing, delete files on destination after transferring (default)
-      --delete-before  When synchronizing, delete files on destination before transferring
-      --delete-during  When synchronizing, delete files during transfer
-      --delete-excluded  Delete files on dest excluded from sync
-      --disable string  Disable a comma separated list of features. Use help to see a list.
-      --drive-acknowledge-abuse  Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
-      --drive-allow-import-name-change  Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
-      --drive-alternate-export  Use alternate export URLs for google documents export.,
-      --drive-auth-owner-only  Only consider files owned by the authenticated user.
-      --drive-chunk-size SizeSuffix  Upload chunk size. Must a power of 2 >= 256k. (default 8M)
-      --drive-client-id string  Google Application Client Id
-      --drive-client-secret string  Google Application Client Secret
-      --drive-export-formats string  Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
-      --drive-formats string  Deprecated: see export_formats
-      --drive-impersonate string  Impersonate this user when using a service account.
-      --drive-import-formats string  Comma separated list of preferred formats for uploading Google docs.
-      --drive-keep-revision-forever  Keep new head revision of each file forever.
-      --drive-list-chunk int  Size of listing chunk 100-1000. 0 to disable.
(default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
- --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. 
(default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. 
- --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. 
- --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. 
- --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. - --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. 
(default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. (default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. 
Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. 
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. 
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors Delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping; use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-cookies Enable session cookiejar.
+ --use-mmap Use mmap allocator (see docs).
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
+ --yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```

### SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

-###### Auto generated by spf13/cobra on 24-Nov-2018
+###### Auto generated by spf13/cobra on 9-Feb-2019
diff --git a/docs/content/commands/rclone_cachestats.md b/docs/content/commands/rclone_cachestats.md
index 2c18ae5fa..dd15b7763 100644
--- a/docs/content/commands/rclone_cachestats.md
+++ b/docs/content/commands/rclone_cachestats.md
@@ -1,5 +1,5 @@
---
-date: 2018-11-24T13:43:29Z
+date: 2019-02-09T10:42:18Z
title: "rclone cachestats"
slug: rclone_cachestats
url: /commands/rclone_cachestats/
@@ -27,285 +27,303 @@ rclone cachestats source: [flags]

### Options inherited from parent commands

```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob: hot, cool or archive.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-list-chunk int Size of blob list. (default 5000)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
- --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
- --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
- --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
- --cache-db-purge Clear all the cached data for this remote on start.
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
- --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
- --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks. (default 4)
- --cache-writes Cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transferring (default)
- --delete-before When synchronizing, delete files on destination before transferring
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
- --drive-alternate-export Use alternate export URLs for google documents export.,
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-formats string Deprecated: see export_formats
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
- --drive-keep-revision-forever Keep new head revision of each file forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-credentials string Service Account Credentials JSON blob
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me.
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-team-drive string ID of the Team Drive
- --drive-trashed-only Only show files that are in the trash.
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use file created date instead of modified date.,
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
- --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- --dropbox-impersonate string Impersonate this user when using a business account.
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, $USER
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-case Ignore case in filters (case insensitive)
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-format string Comma separated list of log format options (default "date,time")
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --onedrive-drive-id string The ID of the drive to use
- --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
- --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-files string Path to local files to serve on the HTTP server.
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-no-auth Don't require auth for certain methods.
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-serve Enable the serving of remote objects.
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-session-token string An AWS session token
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing new objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --s3-v2-auth If true use v2 authentication.
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- --union-remotes string List of space separated remotes.
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
- --yandex-unlink Remove existing public link to file/folder with link command rather than creating.
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-disable-checksum Disable checksums for large (> upload cutoff) files
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors Delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping; use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password.
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads.
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_cat.md b/docs/content/commands/rclone_cat.md index 424a408a2..be35ac273 100644 --- a/docs/content/commands/rclone_cat.md +++ b/docs/content/commands/rclone_cat.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone cat" slug: rclone_cat url: /commands/rclone_cat/ @@ -49,285 +49,303 @@ rclone cat remote:path [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). 
(default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). 
(default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. (default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. 
(default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. - --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. 
(default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
- --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. 
(default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. 
-  --no-update-modtime   Don't update destination mod-time if files identical.
-  -x, --one-file-system   Don't cross filesystem boundaries (unix/macOS only).
-  --onedrive-chunk-size SizeSuffix   Chunk size to upload files with - must be multiple of 320k. (default 10M)
-  --onedrive-client-id string   Microsoft App Client Id
-  --onedrive-client-secret string   Microsoft App Client Secret
-  --onedrive-drive-id string   The ID of the drive to use
-  --onedrive-drive-type string   The type of the drive ( personal | business | documentLibrary )
-  --onedrive-expose-onenote-files   Set to make OneNote files show up in directory listings.
-  --opendrive-password string   Password.
-  --opendrive-username string   Username
-  --pcloud-client-id string   Pcloud App Client Id
-  --pcloud-client-secret string   Pcloud App Client Secret
-  -P, --progress   Show progress during transfer.
-  --qingstor-access-key-id string   QingStor Access Key ID
-  --qingstor-connection-retries int   Number of connection retries. (default 3)
-  --qingstor-endpoint string   Enter a endpoint URL to connection QingStor API.
-  --qingstor-env-auth   Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
-  --qingstor-secret-access-key string   QingStor Secret Access Key (password)
-  --qingstor-zone string   Zone to connect to.
-  -q, --quiet   Print as little stuff as possible
-  --rc   Enable the remote control server.
-  --rc-addr string   IPaddress:Port or :Port to bind server to. (default "localhost:5572")
-  --rc-cert string   SSL PEM key (concatenation of certificate and CA certificate)
-  --rc-client-ca string   Client certificate authority to verify clients with
-  --rc-files string   Path to local files to serve on the HTTP server.
-  --rc-htpasswd string   htpasswd file - if not provided no authentication is done
-  --rc-key string   SSL PEM Private key
-  --rc-max-header-bytes int   Maximum size of request header (default 4096)
-  --rc-no-auth   Don't require auth for certain methods.
-  --rc-pass string   Password for authentication.
-  --rc-realm string   realm for authentication (default "rclone")
-  --rc-serve   Enable the serving of remote objects.
-  --rc-server-read-timeout duration   Timeout for server reading data (default 1h0m0s)
-  --rc-server-write-timeout duration   Timeout for server writing data (default 1h0m0s)
-  --rc-user string   User name for authentication.
-  --retries int   Retry operations this many times if they fail (default 3)
-  --retries-sleep duration   Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
-  --s3-access-key-id string   AWS Access Key ID.
-  --s3-acl string   Canned ACL used when creating buckets and storing or copying objects.
-  --s3-chunk-size SizeSuffix   Chunk size to use for uploading. (default 5M)
-  --s3-disable-checksum   Don't store MD5 checksum with object metadata
-  --s3-endpoint string   Endpoint for S3 API.
-  --s3-env-auth   Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
-  --s3-force-path-style   If true use path style access if false use virtual hosted style. (default true)
-  --s3-location-constraint string   Location constraint - must be set to match the Region.
-  --s3-provider string   Choose your S3 provider.
-  --s3-region string   Region to connect to.
-  --s3-secret-access-key string   AWS Secret Access Key (password)
-  --s3-server-side-encryption string   The server-side encryption algorithm used when storing this object in S3.
-  --s3-session-token string   An AWS session token
-  --s3-sse-kms-key-id string   If using KMS ID you must provide the ARN of Key.
-  --s3-storage-class string   The storage class to use when storing new objects in S3.
-  --s3-upload-concurrency int   Concurrency for multipart uploads. (default 2)
-  --s3-v2-auth   If true use v2 authentication.
-  --sftp-ask-password   Allow asking for SFTP password when needed.
-  --sftp-disable-hashcheck   Disable the execution of SSH commands to determine if remote file hashing is available.
-  --sftp-host string   SSH host to connect to
-  --sftp-key-file string   Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
-  --sftp-pass string   SSH password, leave blank to use ssh-agent.
-  --sftp-path-override string   Override path used by SSH connection.
-  --sftp-port string   SSH port, leave blank to use default (22)
-  --sftp-set-modtime   Set the modified time on the remote if set. (default true)
-  --sftp-use-insecure-cipher   Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
-  --sftp-user string   SSH username, leave blank for current username, ncw
-  --size-only   Skip based on size only, not mod-time or checksum
-  --skip-links   Don't warn about skipped symlinks.
-  --stats duration   Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
-  --stats-file-name-length int   Max file name length in stats. 0 for no limit (default 40)
-  --stats-log-level string   Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
-  --stats-one-line   Make the stats fit on one line.
-  --stats-unit string   Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
-  --streaming-upload-cutoff int   Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
-  --suffix string   Suffix for use with --backup-dir.
-  --swift-auth string   Authentication URL for server (OS_AUTH_URL).
-  --swift-auth-token string   Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
-  --swift-auth-version int   AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
-  --swift-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
-  --swift-domain string   User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
-  --swift-endpoint-type string   Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-  --swift-env-auth   Get swift credentials from environment variables in standard OpenStack form.
-  --swift-key string   API key or password (OS_PASSWORD).
-  --swift-region string   Region name - optional (OS_REGION_NAME)
-  --swift-storage-policy string   The storage policy to use when creating a new container
-  --swift-storage-url string   Storage URL - optional (OS_STORAGE_URL)
-  --swift-tenant string   Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
-  --swift-tenant-domain string   Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
-  --swift-tenant-id string   Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
-  --swift-user string   User name to log in (OS_USERNAME).
-  --swift-user-id string   User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
-  --syslog   Use Syslog for logging
-  --syslog-facility string   Facility for syslog, eg KERN,USER,... (default "DAEMON")
-  --timeout duration   IO idle timeout (default 5m0s)
-  --tpslimit float   Limit HTTP transactions per second to this.
-  --tpslimit-burst int   Max burst of transactions for --tpslimit. (default 1)
-  --track-renames   When synchronizing, track file renames and do a server side move if possible
-  --transfers int   Number of file transfers to run in parallel. (default 4)
-  --union-remotes string   List of space separated remotes.
-  -u, --update   Skip files that are newer on the destination.
-  --use-server-modtime   Use server modified time instead of object metadata
-  --user-agent string   Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45")
-  -v, --verbose count   Print lots more stuff (repeat for more)
-  --webdav-bearer-token string   Bearer token instead of user/pass (eg a Macaroon)
-  --webdav-pass string   Password.
-  --webdav-url string   URL of http host to connect to
-  --webdav-user string   User name
-  --webdav-vendor string   Name of the Webdav site/service/software you are using
-  --yandex-client-id string   Yandex Client Id
-  --yandex-client-secret string   Yandex Client Secret
-  --yandex-unlink   Remove existing public link to file/folder with link command rather than creating.
+  --acd-auth-url string   Auth server URL.
+  --acd-client-id string   Amazon Application Client ID.
+  --acd-client-secret string   Amazon Application Client Secret.
+  --acd-templink-threshold SizeSuffix   Files >= this size will be downloaded via their tempLink. (default 9G)
+  --acd-token-url string   Token server url.
+  --acd-upload-wait-per-gb Duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+  --alias-remote string   Remote or path to alias.
+  --ask-password   Allow prompt for password for encrypted configuration. (default true)
+  --auto-confirm   If enabled, do not request console confirmation.
+  --azureblob-access-tier string   Access tier of blob: hot, cool or archive.
+  --azureblob-account string   Storage Account Name (leave blank to use connection string or SAS URL)
+  --azureblob-chunk-size SizeSuffix   Upload chunk size (<= 100MB). (default 4M)
+  --azureblob-endpoint string   Endpoint for the service
+  --azureblob-key string   Storage Account Key (leave blank to use connection string or SAS URL)
+  --azureblob-list-chunk int   Size of blob list. (default 5000)
+  --azureblob-sas-url string   SAS URL for container level access only
+  --azureblob-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+  --b2-account string   Account ID or Application Key ID
+  --b2-chunk-size SizeSuffix   Upload chunk size. Must fit in memory. (default 96M)
+  --b2-disable-checksum   Disable checksums for large (> upload cutoff) files
+  --b2-endpoint string   Endpoint for the service.
+  --b2-hard-delete   Permanently delete files on remote removal, otherwise hide files.
+  --b2-key string   Application Key
+  --b2-test-mode string   A flag string for X-Bz-Test-Mode header for debugging.
+  --b2-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload. (default 200M)
+  --b2-versions   Include old versions in directory listings.
+  --backup-dir string   Make backups into hierarchy based in DIR.
+  --bind string   Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+  --box-client-id string   Box App Client Id.
+  --box-client-secret string   Box App Client Secret
+  --box-commit-retries int   Max number of times to try committing a multipart file. (default 100)
+  --box-upload-cutoff SizeSuffix   Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+  --buffer-size SizeSuffix   In memory buffer size when reading files for each --transfer. (default 16M)
+  --bwlimit BwTimetable   Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+  --cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+  --cache-chunk-no-memory   Disable the in-memory cache for storing chunks during streaming.
+  --cache-chunk-path string   Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
+  --cache-chunk-size SizeSuffix   The size of a chunk (partial file data). (default 5M)
+  --cache-chunk-total-size SizeSuffix   The total size that the chunks can take up on the local disk. (default 10G)
+  --cache-db-path string   Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
+  --cache-db-purge   Clear all the cached data for this remote on start.
+  --cache-db-wait-time Duration   How long to wait for the DB to be available - 0 is unlimited (default 1s)
+  --cache-dir string   Directory rclone will use for caching. (default "$HOME/.cache/rclone")
+  --cache-info-age Duration   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+  --cache-plex-insecure string   Skip all certificate verifications when connecting to the Plex server
+  --cache-plex-password string   The password of the Plex user
+  --cache-plex-url string   The URL of the Plex server
+  --cache-plex-username string   The username of the Plex user
+  --cache-read-retries int   How many times to retry a read from a cache storage. (default 10)
+  --cache-remote string   Remote to cache.
+  --cache-rps int   Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+  --cache-tmp-upload-path string   Directory to keep temporary files until they are uploaded.
+  --cache-tmp-wait-time Duration   How long should files be stored in local cache before being uploaded (default 15s)
+  --cache-workers int   How many workers should run in parallel to download chunks. (default 4)
+  --cache-writes   Cache file data on writes through the FS
+  --checkers int   Number of checkers to run in parallel. (default 8)
+  -c, --checksum   Skip based on checksum (if available) & size, not mod-time & size
+  --config string   Config file. (default "/home/ncw/.rclone.conf")
+  --contimeout duration   Connect timeout (default 1m0s)
+  -L, --copy-links   Follow symlinks and copy the pointed to item.
+  --cpuprofile string   Write cpu profile to file
+  --crypt-directory-name-encryption   Option to either encrypt directory names or leave them intact. (default true)
+  --crypt-filename-encryption string   How to encrypt the filenames. (default "standard")
+  --crypt-password string   Password or pass phrase for encryption.
+  --crypt-password2 string   Password or pass phrase for salt. Optional but recommended.
+  --crypt-remote string   Remote to encrypt/decrypt.
+  --crypt-show-mapping   For all files listed show how the names encrypt.
+  --delete-after   When synchronizing, delete files on destination after transferring (default)
+  --delete-before   When synchronizing, delete files on destination before transferring
+  --delete-during   When synchronizing, delete files during transfer
+  --delete-excluded   Delete files on dest excluded from sync
+  --disable string   Disable a comma separated list of features. Use help to see a list.
+  --drive-acknowledge-abuse   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+  --drive-allow-import-name-change   Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+  --drive-alternate-export   Use alternate export URLs for google documents export.,
+  --drive-auth-owner-only   Only consider files owned by the authenticated user.
+  --drive-chunk-size SizeSuffix   Upload chunk size. Must a power of 2 >= 256k. (default 8M)
+  --drive-client-id string   Google Application Client Id
+  --drive-client-secret string   Google Application Client Secret
+  --drive-export-formats string   Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+  --drive-formats string   Deprecated: see export_formats
+  --drive-impersonate string   Impersonate this user when using a service account.
+  --drive-import-formats string   Comma separated list of preferred formats for uploading Google docs.
+  --drive-keep-revision-forever   Keep new head revision of each file forever.
+  --drive-list-chunk int   Size of listing chunk 100-1000. 0 to disable. (default 1000)
+  --drive-pacer-burst int   Number of API calls to allow without sleeping. (default 100)
+  --drive-pacer-min-sleep Duration   Minimum time to sleep between API calls. (default 100ms)
+  --drive-root-folder-id string   ID of the root folder
+  --drive-scope string   Scope that rclone should use when requesting access from drive.
+  --drive-service-account-credentials string   Service Account Credentials JSON blob
+  --drive-service-account-file string   Service Account Credentials JSON file path
+  --drive-shared-with-me   Only show files that are shared with me.
+  --drive-skip-gdocs   Skip google documents in all listings.
+  --drive-team-drive string   ID of the Team Drive
+  --drive-trashed-only   Only show files that are in the trash.
+  --drive-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 8M)
+  --drive-use-created-date   Use file created date instead of modified date.,
+  --drive-use-trash   Send files to the trash instead of deleting permanently. (default true)
+  --drive-v2-download-min-size SizeSuffix   If Object's are greater, use drive v2 API to download. (default off)
+  --dropbox-chunk-size SizeSuffix   Upload chunk size. (< 150M). (default 48M)
+  --dropbox-client-id string   Dropbox App Client Id
+  --dropbox-client-secret string   Dropbox App Client Secret
+  --dropbox-impersonate string   Impersonate this user when using a business account.
+  -n, --dry-run   Do a trial run with no permanent changes
+  --dump DumpFlags   List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+  --dump-bodies   Dump HTTP headers and bodies - may contain sensitive info
+  --dump-headers   Dump HTTP bodies - may contain sensitive info
+  --exclude stringArray   Exclude files matching pattern
+  --exclude-from stringArray   Read exclude patterns from file
+  --exclude-if-present string   Exclude directories if filename is present
+  --fast-list   Use recursive list if available. Uses more memory but fewer transactions.
+  --files-from stringArray   Read list of source-file names from file
+  -f, --filter stringArray   Add a file-filtering rule
+  --filter-from stringArray   Read filtering patterns from a file
+  --ftp-host string   FTP host to connect to
+  --ftp-pass string   FTP password
+  --ftp-port string   FTP port, leave blank to use default (21)
+  --ftp-user string   FTP username, leave blank for current username, $USER
+  --gcs-bucket-acl string   Access Control List for new buckets.
+  --gcs-client-id string   Google Application Client Id
+  --gcs-client-secret string   Google Application Client Secret
+  --gcs-location string   Location for the newly created buckets.
+  --gcs-object-acl string   Access Control List for new objects.
+  --gcs-project-number string   Project number.
+  --gcs-service-account-file string   Service Account Credentials JSON file path
+  --gcs-storage-class string   The storage class to use when storing objects in Google Cloud Storage.
+  --http-url string   URL of http host to connect to
+  --hubic-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
+  --hubic-client-id string   Hubic Client Id
+  --hubic-client-secret string   Hubic Client Secret
+  --hubic-no-chunk   Don't chunk files during streaming upload.
+  --ignore-case   Ignore case in filters (case insensitive)
+  --ignore-checksum   Skip post copy check of checksums.
+  --ignore-errors   delete even if there are I/O errors
+  --ignore-existing   Skip all files that exist on destination
+  --ignore-size   Ignore size when skipping use mod-time or checksum.
+  -I, --ignore-times   Don't skip files that match size and time - transfer all files
+  --immutable   Do not modify files. Fail if existing files have been modified.
+  --include stringArray   Include files matching pattern
+  --include-from stringArray   Read include patterns from file
+  --jottacloud-hard-delete   Delete files permanently rather than putting them into the trash.
+  --jottacloud-md5-memory-limit SizeSuffix   Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+  --jottacloud-mountpoint string   The mountpoint to use.
+  --jottacloud-unlink   Remove existing public link to file/folder with link command rather than creating.
+  --jottacloud-upload-resume-limit SizeSuffix   Files bigger than this can be resumed if the upload fail's. (default 10M)
+  --jottacloud-user string   User Name:
+  -l, --links   Translate symlinks to/from regular files with a '.rclonelink' extension
+  --local-no-check-updated   Don't check to see if the files change during upload
+  --local-no-unicode-normalization   Don't apply unicode normalization to paths and filenames (Deprecated)
+  --local-nounc string   Disable UNC (long path names) conversion on Windows
+  --log-file string   Log everything to this file
+  --log-format string   Comma separated list of log format options (default "date,time")
+  --log-level string   Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+  --low-level-retries int   Number of low level retries to do. (default 10)
+  --max-age Duration   Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+  --max-backlog int   Maximum number of objects in sync or check backlog. (default 10000)
+  --max-delete int   When synchronizing, limit the number of deletes (default -1)
+  --max-depth int   If set limits the recursion depth to this. (default -1)
+  --max-size SizeSuffix   Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+  --max-transfer SizeSuffix   Maximum size of data to transfer. (default off)
+  --mega-debug   Output more debug from Mega.
+  --mega-hard-delete   Delete files permanently rather than putting them into the trash.
+  --mega-pass string   Password.
+  --mega-user string   User name
+  --memprofile string   Write memory profile to file
+  --min-age Duration   Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+  --min-size SizeSuffix   Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+  --modify-window duration   Max time diff to be considered the same (default 1ns)
+  --no-check-certificate   Do not verify the server SSL certificate. Insecure.
+  --no-gzip-encoding   Don't set Accept-Encoding: gzip.
+  --no-traverse   Don't traverse destination file system on copy.
+  --no-update-modtime   Don't update destination mod-time if files identical.
+  -x, --one-file-system   Don't cross filesystem boundaries (unix/macOS only).
+  --onedrive-chunk-size SizeSuffix   Chunk size to upload files with - must be multiple of 320k. (default 10M)
+  --onedrive-client-id string   Microsoft App Client Id
+  --onedrive-client-secret string   Microsoft App Client Secret
+  --onedrive-drive-id string   The ID of the drive to use
+  --onedrive-drive-type string   The type of the drive ( personal | business | documentLibrary )
+  --onedrive-expose-onenote-files   Set to make OneNote files show up in directory listings.
+  --opendrive-password string   Password.
+  --opendrive-username string   Username
+  --pcloud-client-id string   Pcloud App Client Id
+  --pcloud-client-secret string   Pcloud App Client Secret
+  -P, --progress   Show progress during transfer.
+  --qingstor-access-key-id string   QingStor Access Key ID
+  --qingstor-chunk-size SizeSuffix   Chunk size to use for uploading. (default 4M)
+  --qingstor-connection-retries int   Number of connection retries. (default 3)
+  --qingstor-endpoint string   Enter a endpoint URL to connection QingStor API.
+  --qingstor-env-auth   Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
+  --qingstor-secret-access-key string   QingStor Secret Access Key (password)
+  --qingstor-upload-concurrency int   Concurrency for multipart uploads. (default 1)
+  --qingstor-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 200M)
+  --qingstor-zone string   Zone to connect to.
+  -q, --quiet   Print as little stuff as possible
+  --rc   Enable the remote control server.
+  --rc-addr string   IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+  --rc-cert string   SSL PEM key (concatenation of certificate and CA certificate)
+  --rc-client-ca string   Client certificate authority to verify clients with
+  --rc-files string   Path to local files to serve on the HTTP server.
+  --rc-htpasswd string   htpasswd file - if not provided no authentication is done
+  --rc-key string   SSL PEM Private key
+  --rc-max-header-bytes int   Maximum size of request header (default 4096)
+  --rc-no-auth   Don't require auth for certain methods.
+  --rc-pass string   Password for authentication.
+  --rc-realm string   realm for authentication (default "rclone")
+  --rc-serve   Enable the serving of remote objects.
+  --rc-server-read-timeout duration   Timeout for server reading data (default 1h0m0s)
+  --rc-server-write-timeout duration   Timeout for server writing data (default 1h0m0s)
+  --rc-user string   User name for authentication.
+  --retries int   Retry operations this many times if they fail (default 3)
+  --retries-sleep duration   Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+  --s3-access-key-id string   AWS Access Key ID.
+  --s3-acl string   Canned ACL used when creating buckets and storing or copying objects.
+  --s3-bucket-acl string   Canned ACL used when creating buckets.
+  --s3-chunk-size SizeSuffix   Chunk size to use for uploading. (default 5M)
+  --s3-disable-checksum   Don't store MD5 checksum with object metadata
+  --s3-endpoint string   Endpoint for S3 API.
+  --s3-env-auth   Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+  --s3-force-path-style   If true use path style access if false use virtual hosted style. (default true)
+  --s3-location-constraint string   Location constraint - must be set to match the Region.
+  --s3-provider string   Choose your S3 provider.
+  --s3-region string   Region to connect to.
+  --s3-secret-access-key string   AWS Secret Access Key (password)
+  --s3-server-side-encryption string   The server-side encryption algorithm used when storing this object in S3.
+  --s3-session-token string   An AWS session token
+  --s3-sse-kms-key-id string   If using KMS ID you must provide the ARN of Key.
+  --s3-storage-class string   The storage class to use when storing new objects in S3.
+  --s3-upload-concurrency int   Concurrency for multipart uploads. (default 4)
+  --s3-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 200M)
+  --s3-v2-auth   If true use v2 authentication.
+  --sftp-ask-password   Allow asking for SFTP password when needed.
+  --sftp-disable-hashcheck   Disable the execution of SSH commands to determine if remote file hashing is available.
+  --sftp-host string   SSH host to connect to
+  --sftp-key-file string   Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
+  --sftp-key-file-pass string   The passphrase to decrypt the PEM-encoded private key file.
+  --sftp-key-use-agent   When set forces the usage of the ssh-agent.
+  --sftp-pass string   SSH password, leave blank to use ssh-agent.
+  --sftp-path-override string   Override path used by SSH connection.
+  --sftp-port string   SSH port, leave blank to use default (22)
+  --sftp-set-modtime   Set the modified time on the remote if set. (default true)
+  --sftp-use-insecure-cipher   Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+  --sftp-user string   SSH username, leave blank for current username, ncw
+  --size-only   Skip based on size only, not mod-time or checksum
+  --skip-links   Don't warn about skipped symlinks.
+  --stats duration   Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+  --stats-file-name-length int   Max file name length in stats. 0 for no limit (default 45)
+  --stats-log-level string   Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+  --stats-one-line   Make the stats fit on one line.
+  --stats-unit string   Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+  --streaming-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+  --suffix string   Suffix for use with --backup-dir.
+  --swift-application-credential-id string   Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
+  --swift-application-credential-name string   Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
+  --swift-application-credential-secret string   Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
+  --swift-auth string   Authentication URL for server (OS_AUTH_URL).
+  --swift-auth-token string   Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+  --swift-auth-version int   AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+  --swift-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
+  --swift-domain string   User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+  --swift-endpoint-type string   Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+  --swift-env-auth   Get swift credentials from environment variables in standard OpenStack form.
+  --swift-key string   API key or password (OS_PASSWORD).
+  --swift-no-chunk   Don't chunk files during streaming upload.
+  --swift-region string   Region name - optional (OS_REGION_NAME)
+  --swift-storage-policy string   The storage policy to use when creating a new container
+  --swift-storage-url string   Storage URL - optional (OS_STORAGE_URL)
+  --swift-tenant string   Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+  --swift-tenant-domain string   Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+  --swift-tenant-id string   Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+  --swift-user string   User name to log in (OS_USERNAME).
+  --swift-user-id string   User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+  --syslog   Use Syslog for logging
+  --syslog-facility string   Facility for syslog, eg KERN,USER,... (default "DAEMON")
+  --timeout duration   IO idle timeout (default 5m0s)
+  --tpslimit float   Limit HTTP transactions per second to this.
+  --tpslimit-burst int   Max burst of transactions for --tpslimit. (default 1)
+  --track-renames   When synchronizing, track file renames and do a server side move if possible
+  --transfers int   Number of file transfers to run in parallel. (default 4)
+  --union-remotes string   List of space separated remotes.
+  -u, --update   Skip files that are newer on the destination.
+  --use-cookies   Enable session cookiejar.
+  --use-mmap   Use mmap allocator (see docs).
+  --use-server-modtime   Use server modified time instead of object metadata
+  --user-agent string   Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
+  -v, --verbose count   Print lots more stuff (repeat for more)
+  --webdav-bearer-token string   Bearer token instead of user/pass (eg a Macaroon)
+  --webdav-pass string   Password.
+  --webdav-url string   URL of http host to connect to
+  --webdav-user string   User name
+  --webdav-vendor string   Name of the Webdav site/service/software you are using
+  --yandex-client-id string   Yandex Client Id
+  --yandex-client-secret string   Yandex Client Secret
+  --yandex-unlink   Remove existing public link to file/folder with link command rather than creating.
 ```
 ### SEE ALSO
 * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 24-Nov-2018
+###### Auto generated by spf13/cobra on 9-Feb-2019
diff --git a/docs/content/commands/rclone_check.md b/docs/content/commands/rclone_check.md
index 559b085f3..223ecb0a0 100644
--- a/docs/content/commands/rclone_check.md
+++ b/docs/content/commands/rclone_check.md
@@ -1,5 +1,5 @@
 ---
-date: 2018-11-24T13:43:29Z
+date: 2019-02-09T10:42:18Z
 title: "rclone check"
 slug: rclone_check
 url: /commands/rclone_check/
@@ -43,285 +43,303 @@ rclone check source:path dest:path [flags]
 ### Options inherited from parent commands
 ```
-  --acd-auth-url string   Auth server URL.
-  --acd-client-id string   Amazon Application Client ID.
-  --acd-client-secret string   Amazon Application Client Secret.
-  --acd-templink-threshold SizeSuffix   Files >= this size will be downloaded via their tempLink. (default 9G)
-  --acd-token-url string   Token server url.
-  --acd-upload-wait-per-gb Duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
-  --alias-remote string   Remote or path to alias.
-  --ask-password   Allow prompt for password for encrypted configuration. (default true)
-  --auto-confirm   If enabled, do not request console confirmation.
-  --azureblob-access-tier string   Access tier of blob: hot, cool or archive.
-  --azureblob-account string   Storage Account Name (leave blank to use connection string or SAS URL)
-  --azureblob-chunk-size SizeSuffix   Upload chunk size (<= 100MB). (default 4M)
-  --azureblob-endpoint string   Endpoint for the service
-  --azureblob-key string   Storage Account Key (leave blank to use connection string or SAS URL)
-  --azureblob-list-chunk int   Size of blob list. (default 5000)
-  --azureblob-sas-url string   SAS URL for container level access only
-  --azureblob-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (<= 256MB). (default 256M)
-  --b2-account string   Account ID or Application Key ID
-  --b2-chunk-size SizeSuffix   Upload chunk size. Must fit in memory. (default 96M)
-  --b2-endpoint string   Endpoint for the service.
-  --b2-hard-delete   Permanently delete files on remote removal, otherwise hide files.
-  --b2-key string   Application Key
-  --b2-test-mode string   A flag string for X-Bz-Test-Mode header for debugging.
-  --b2-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload. (default 200M)
-  --b2-versions   Include old versions in directory listings.
-  --backup-dir string   Make backups into hierarchy based in DIR.
-  --bind string   Local address to bind to for outgoing connections, IPv4, IPv6 or name.
-  --box-client-id string   Box App Client Id.
-  --box-client-secret string   Box App Client Secret
-  --box-commit-retries int   Max number of times to try committing a multipart file. (default 100)
-  --box-upload-cutoff SizeSuffix   Cutoff for switching to multipart upload (>= 50MB). (default 50M)
-  --buffer-size int   In memory buffer size when reading files for each --transfer. (default 16M)
-  --bwlimit BwTimetable   Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
-  --cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
-  --cache-chunk-no-memory   Disable the in-memory cache for storing chunks during streaming.
-  --cache-chunk-path string   Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
-  --cache-chunk-size SizeSuffix   The size of a chunk (partial file data). (default 5M)
-  --cache-chunk-total-size SizeSuffix   The total size that the chunks can take up on the local disk. (default 10G)
-  --cache-db-path string   Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
-  --cache-db-purge   Clear all the cached data for this remote on start.
-  --cache-db-wait-time Duration   How long to wait for the DB to be available - 0 is unlimited (default 1s)
-  --cache-dir string   Directory rclone will use for caching. (default "$HOME/.cache/rclone")
-  --cache-info-age Duration   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
-  --cache-plex-insecure string   Skip all certificate verifications when connecting to the Plex server
-  --cache-plex-password string   The password of the Plex user
-  --cache-plex-url string   The URL of the Plex server
-  --cache-plex-username string   The username of the Plex user
-  --cache-read-retries int   How many times to retry a read from a cache storage. (default 10)
-  --cache-remote string   Remote to cache.
-  --cache-rps int   Limits the number of requests per second to the source FS (-1 to disable) (default -1)
-  --cache-tmp-upload-path string   Directory to keep temporary files until they are uploaded.
-  --cache-tmp-wait-time Duration   How long should files be stored in local cache before being uploaded (default 15s)
-  --cache-workers int   How many workers should run in parallel to download chunks. (default 4)
-  --cache-writes   Cache file data on writes through the FS
-  --checkers int   Number of checkers to run in parallel. (default 8)
-  -c, --checksum   Skip based on checksum & size, not mod-time & size
-  --config string   Config file. (default "/home/ncw/.rclone.conf")
-  --contimeout duration   Connect timeout (default 1m0s)
-  -L, --copy-links   Follow symlinks and copy the pointed to item.
-  --cpuprofile string   Write cpu profile to file
-  --crypt-directory-name-encryption   Option to either encrypt directory names or leave them intact.
(default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. - --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. 
(default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
- --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. 
(default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. 
- --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. 
- --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. 
- --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. - --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. 
(default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. (default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. 
Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. 
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for Google documents export. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. 
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name: + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads.
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_cleanup.md b/docs/content/commands/rclone_cleanup.md index 3b90cf6cc..acf468ef6 100644 --- a/docs/content/commands/rclone_cleanup.md +++ b/docs/content/commands/rclone_cleanup.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone cleanup" slug: rclone_cleanup url: /commands/rclone_cleanup/ @@ -28,285 +28,303 @@ rclone cleanup remote:path [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). 
(default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). 
(default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. (default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. 
(default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. - --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. 
(default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
- --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. 
(default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. 
- --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. 
- --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. 
- --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. - --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. 
(default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. (default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. 
Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. 
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors Delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping; use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name: + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password.
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads.
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_config.md b/docs/content/commands/rclone_config.md index 2748bede0..b718cf9a7 100644 --- a/docs/content/commands/rclone_config.md +++ b/docs/content/commands/rclone_config.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone config" slug: rclone_config url: /commands/rclone_config/ @@ -28,281 +28,299 @@ rclone config [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). 
(default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). 
(default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. (default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. 
(default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. - --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. 
(default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
- --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. 
(default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. 
- --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. 
- --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. 
- --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. - --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. 
(default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. (default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. 
Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. 
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export., + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. 
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date., + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP bodies - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M) + --jottacloud-user string User Name: + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+   --webdav-url string URL of http host to connect to
+   --webdav-user string User name
+   --webdav-vendor string Name of the Webdav site/service/software you are using
+   --yandex-client-id string Yandex Client Id
+   --yandex-client-secret string Yandex Client Secret
+   --yandex-unlink Remove existing public link to file/folder with link command rather than creating.
 ```

 ### SEE ALSO

@@ -318,4 +336,4 @@ rclone config [flags]
 * [rclone config show](/commands/rclone_config_show/) - Print (decrypted) config file, or the config for a single remote.
 * [rclone config update](/commands/rclone_config_update/) - Update options in an existing remote.

-###### Auto generated by spf13/cobra on 24-Nov-2018
+###### Auto generated by spf13/cobra on 9-Feb-2019
diff --git a/docs/content/commands/rclone_config_create.md b/docs/content/commands/rclone_config_create.md
index 027e4e1fd..c0aecfeef 100644
--- a/docs/content/commands/rclone_config_create.md
+++ b/docs/content/commands/rclone_config_create.md
@@ -1,5 +1,5 @@
 ---
-date: 2018-11-24T13:43:29Z
+date: 2019-02-09T10:42:18Z
 title: "rclone config create"
 slug: rclone_config_create
 url: /commands/rclone_config_create/
@@ -19,6 +19,15 @@ you would do:

     rclone config create myremote swift env_auth true

+Note that if the config process would normally ask a question the
+default is taken. Each time that happens rclone will print a message
+saying how to affect the value taken.
+
+So for example if you wanted to configure a Google Drive remote but
+using remote authorization you would do this:
+
+    rclone config create mydrive drive config_is_local false
+
 ```
 rclone config create <name> <type> [<key> <value>]* [flags]
@@ -33,285 +42,303 @@ ### Options inherited from parent commands
 ```
-   --acd-auth-url string Auth server URL.
-   --acd-client-id string Amazon Application Client ID.
-   --acd-client-secret string Amazon Application Client Secret.
-   --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
-   --acd-token-url string Token server url.
-   --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
-   --alias-remote string Remote or path to alias.
-   --ask-password Allow prompt for password for encrypted configuration. (default true)
-   --auto-confirm If enabled, do not request console confirmation.
-   --azureblob-access-tier string Access tier of blob: hot, cool or archive.
-   --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
-   --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
-   --azureblob-endpoint string Endpoint for the service
-   --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
-   --azureblob-list-chunk int Size of blob list. (default 5000)
-   --azureblob-sas-url string SAS URL for container level access only
-   --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
-   --b2-account string Account ID or Application Key ID
-   --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
-   --b2-endpoint string Endpoint for the service.
-   --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
-   --b2-key string Application Key
-   --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
-   --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
-   --b2-versions Include old versions in directory listings.
-   --backup-dir string Make backups into hierarchy based in DIR.
-   --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
-   --box-client-id string Box App Client Id.
-   --box-client-secret string Box App Client Secret
-   --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
-   --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
-   --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
-   --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
-   --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
-   --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
-   --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
-   --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
-   --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
-   --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
-   --cache-db-purge Clear all the cached data for this remote on start.
-   --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
-   --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
-   --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
-   --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
-   --cache-plex-password string The password of the Plex user
-   --cache-plex-url string The URL of the Plex server
-   --cache-plex-username string The username of the Plex user
-   --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
-   --cache-remote string Remote to cache.
-   --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
-   --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
-   --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
-   --cache-workers int How many workers should run in parallel to download chunks. (default 4)
-   --cache-writes Cache file data on writes through the FS
-   --checkers int Number of checkers to run in parallel. (default 8)
-   -c, --checksum Skip based on checksum & size, not mod-time & size
-   --config string Config file. (default "/home/ncw/.rclone.conf")
-   --contimeout duration Connect timeout (default 1m0s)
-   -L, --copy-links Follow symlinks and copy the pointed to item.
-   --cpuprofile string Write cpu profile to file
-   --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
-   --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
-   --crypt-password string Password or pass phrase for encryption.
-   --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
-   --crypt-remote string Remote to encrypt/decrypt.
-   --crypt-show-mapping For all files listed show how the names encrypt.
-   --delete-after When synchronizing, delete files on destination after transferring (default)
-   --delete-before When synchronizing, delete files on destination before transferring
-   --delete-during When synchronizing, delete files during transfer
-   --delete-excluded Delete files on dest excluded from sync
-   --disable string Disable a comma separated list of features. Use help to see a list.
-   --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
-   --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
-   --drive-alternate-export Use alternate export URLs for google documents export.,
-   --drive-auth-owner-only Only consider files owned by the authenticated user.
-   --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
-   --drive-client-id string Google Application Client Id
-   --drive-client-secret string Google Application Client Secret
-   --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
-   --drive-formats string Deprecated: see export_formats
-   --drive-impersonate string Impersonate this user when using a service account.
-   --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
-   --drive-keep-revision-forever Keep new head revision of each file forever.
-   --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
-   --drive-root-folder-id string ID of the root folder
-   --drive-scope string Scope that rclone should use when requesting access from drive.
-   --drive-service-account-credentials string Service Account Credentials JSON blob
-   --drive-service-account-file string Service Account Credentials JSON file path
-   --drive-shared-with-me Only show files that are shared with me.
-   --drive-skip-gdocs Skip google documents in all listings.
-   --drive-team-drive string ID of the Team Drive
-   --drive-trashed-only Only show files that are in the trash.
-   --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
-   --drive-use-created-date Use file created date instead of modified date.,
-   --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
-   --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
-   --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-   --dropbox-client-id string Dropbox App Client Id
-   --dropbox-client-secret string Dropbox App Client Secret
-   --dropbox-impersonate string Impersonate this user when using a business account.
-   -n, --dry-run Do a trial run with no permanent changes
-   --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
-   --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
-   --dump-headers Dump HTTP bodies - may contain sensitive info
-   --exclude stringArray Exclude files matching pattern
-   --exclude-from stringArray Read exclude patterns from file
-   --exclude-if-present string Exclude directories if filename is present
-   --fast-list Use recursive list if available. Uses more memory but fewer transactions.
-   --files-from stringArray Read list of source-file names from file
-   -f, --filter stringArray Add a file-filtering rule
-   --filter-from stringArray Read filtering patterns from a file
-   --ftp-host string FTP host to connect to
-   --ftp-pass string FTP password
-   --ftp-port string FTP port, leave blank to use default (21)
-   --ftp-user string FTP username, leave blank for current username, $USER
-   --gcs-bucket-acl string Access Control List for new buckets.
-   --gcs-client-id string Google Application Client Id
-   --gcs-client-secret string Google Application Client Secret
-   --gcs-location string Location for the newly created buckets.
-   --gcs-object-acl string Access Control List for new objects.
-   --gcs-project-number string Project number.
-   --gcs-service-account-file string Service Account Credentials JSON file path
-   --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
-   --http-url string URL of http host to connect to
-   --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
-   --hubic-client-id string Hubic Client Id
-   --hubic-client-secret string Hubic Client Secret
-   --ignore-case Ignore case in filters (case insensitive)
-   --ignore-checksum Skip post copy check of checksums.
-   --ignore-errors delete even if there are I/O errors
-   --ignore-existing Skip all files that exist on destination
-   --ignore-size Ignore size when skipping use mod-time or checksum.
-   -I, --ignore-times Don't skip files that match size and time - transfer all files
-   --immutable Do not modify files. Fail if existing files have been modified.
-   --include stringArray Include files matching pattern
-   --include-from stringArray Read include patterns from file
-   --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
-   --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
-   --jottacloud-mountpoint string The mountpoint to use.
-   --jottacloud-pass string Password.
-   --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
-   --jottacloud-user string User Name
-   --local-no-check-updated Don't check to see if the files change during upload
-   --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
-   --local-nounc string Disable UNC (long path names) conversion on Windows
-   --log-file string Log everything to this file
-   --log-format string Comma separated list of log format options (default "date,time")
-   --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
-   --low-level-retries int Number of low level retries to do. (default 10)
-   --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-   --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
-   --max-delete int When synchronizing, limit the number of deletes (default -1)
-   --max-depth int If set limits the recursion depth to this. (default -1)
-   --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
-   --max-transfer int Maximum size of data to transfer. (default off)
-   --mega-debug Output more debug from Mega.
-   --mega-hard-delete Delete files permanently rather than putting them into the trash.
-   --mega-pass string Password.
-   --mega-user string User name
-   --memprofile string Write memory profile to file
-   --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-   --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-   --modify-window duration Max time diff to be considered the same (default 1ns)
-   --no-check-certificate Do not verify the server SSL certificate. Insecure.
-   --no-gzip-encoding Don't set Accept-Encoding: gzip.
-   --no-traverse Obsolete - does nothing.
-   --no-update-modtime Don't update destination mod-time if files identical.
-   -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
-   --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
-   --onedrive-client-id string Microsoft App Client Id
-   --onedrive-client-secret string Microsoft App Client Secret
-   --onedrive-drive-id string The ID of the drive to use
-   --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-   --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
-   --opendrive-password string Password.
-   --opendrive-username string Username
-   --pcloud-client-id string Pcloud App Client Id
-   --pcloud-client-secret string Pcloud App Client Secret
-   -P, --progress Show progress during transfer.
-   --qingstor-access-key-id string QingStor Access Key ID
-   --qingstor-connection-retries int Number of connection retries. (default 3)
-   --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
-   --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
-   --qingstor-secret-access-key string QingStor Secret Access Key (password)
-   --qingstor-zone string Zone to connect to.
-   -q, --quiet Print as little stuff as possible
-   --rc Enable the remote control server.
-   --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
-   --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
-   --rc-client-ca string Client certificate authority to verify clients with
-   --rc-files string Path to local files to serve on the HTTP server.
-   --rc-htpasswd string htpasswd file - if not provided no authentication is done
-   --rc-key string SSL PEM Private key
-   --rc-max-header-bytes int Maximum size of request header (default 4096)
-   --rc-no-auth Don't require auth for certain methods.
-   --rc-pass string Password for authentication.
-   --rc-realm string realm for authentication (default "rclone")
-   --rc-serve Enable the serving of remote objects.
-   --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
-   --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
-   --rc-user string User name for authentication.
-   --retries int Retry operations this many times if they fail (default 3)
-   --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
-   --s3-access-key-id string AWS Access Key ID.
-   --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
-   --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
-   --s3-disable-checksum Don't store MD5 checksum with object metadata
-   --s3-endpoint string Endpoint for S3 API.
-   --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
-   --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
-   --s3-location-constraint string Location constraint - must be set to match the Region.
-   --s3-provider string Choose your S3 provider.
-   --s3-region string Region to connect to.
-   --s3-secret-access-key string AWS Secret Access Key (password)
-   --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
-   --s3-session-token string An AWS session token
-   --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
-   --s3-storage-class string The storage class to use when storing new objects in S3.
-   --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
-   --s3-v2-auth If true use v2 authentication.
-   --sftp-ask-password Allow asking for SFTP password when needed.
-   --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
-   --sftp-host string SSH host to connect to
-   --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
-   --sftp-pass string SSH password, leave blank to use ssh-agent.
-   --sftp-path-override string Override path used by SSH connection.
-   --sftp-port string SSH port, leave blank to use default (22)
-   --sftp-set-modtime Set the modified time on the remote if set. (default true)
-   --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
-   --sftp-user string SSH username, leave blank for current username, ncw
-   --size-only Skip based on size only, not mod-time or checksum
-   --skip-links Don't warn about skipped symlinks.
-   --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
-   --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
-   --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
-   --stats-one-line Make the stats fit on one line.
-   --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
-   --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
-   --suffix string Suffix for use with --backup-dir.
-   --swift-auth string Authentication URL for server (OS_AUTH_URL).
-   --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
-   --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
-   --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
-   --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
-   --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-   --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
-   --swift-key string API key or password (OS_PASSWORD).
-   --swift-region string Region name - optional (OS_REGION_NAME)
-   --swift-storage-policy string The storage policy to use when creating a new container
-   --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-   --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
-   --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
-   --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
-   --swift-user string User name to log in (OS_USERNAME).
-   --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
-   --syslog Use Syslog for logging
-   --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
-   --timeout duration IO idle timeout (default 5m0s)
-   --tpslimit float Limit HTTP transactions per second to this.
-   --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
-   --track-renames When synchronizing, track file renames and do a server side move if possible
-   --transfers int Number of file transfers to run in parallel. (default 4)
-   --union-remotes string List of space separated remotes.
-   -u, --update Skip files that are newer on the destination.
-   --use-server-modtime Use server modified time instead of object metadata
-   --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45")
-   -v, --verbose count Print lots more stuff (repeat for more)
-   --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
-   --webdav-pass string Password.
-   --webdav-url string URL of http host to connect to
-   --webdav-user string User name
-   --webdav-vendor string Name of the Webdav site/service/software you are using
-   --yandex-client-id string Yandex Client Id
-   --yandex-client-secret string Yandex Client Secret
-   --yandex-unlink Remove existing public link to file/folder with link command rather than creating.
+   --acd-auth-url string Auth server URL.
+   --acd-client-id string Amazon Application Client ID.
+   --acd-client-secret string Amazon Application Client Secret.
+   --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+   --acd-token-url string Token server url.
+   --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+   --alias-remote string Remote or path to alias.
+   --ask-password Allow prompt for password for encrypted configuration. (default true)
+   --auto-confirm If enabled, do not request console confirmation.
+   --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+   --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+   --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+   --azureblob-endpoint string Endpoint for the service
+   --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+   --azureblob-list-chunk int Size of blob list. (default 5000)
+   --azureblob-sas-url string SAS URL for container level access only
+   --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+   --b2-account string Account ID or Application Key ID
+   --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+   --b2-disable-checksum Disable checksums for large (> upload cutoff) files
+   --b2-endpoint string Endpoint for the service.
+   --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+   --b2-key string Application Key
+   --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+   --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+   --b2-versions Include old versions in directory listings.
+   --backup-dir string Make backups into hierarchy based in DIR.
+   --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+   --box-client-id string Box App Client Id.
+   --box-client-secret string Box App Client Secret
+   --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+   --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+   --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
+   --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+   --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+   --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+   --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
+   --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+   --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+   --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
+   --cache-db-purge Clear all the cached data for this remote on start.
+   --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+   --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
+   --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+   --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+   --cache-plex-password string The password of the Plex user
+   --cache-plex-url string The URL of the Plex server
+   --cache-plex-username string The username of the Plex user
+   --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+   --cache-remote string Remote to cache.
+   --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+   --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+   --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+   --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+   --cache-writes Cache file data on writes through the FS
+   --checkers int Number of checkers to run in parallel. (default 8)
+   -c, --checksum Skip based on checksum (if available) & size, not mod-time & size
+   --config string Config file. (default "/home/ncw/.rclone.conf")
+   --contimeout duration Connect timeout (default 1m0s)
+   -L, --copy-links Follow symlinks and copy the pointed to item.
+   --cpuprofile string Write cpu profile to file
+   --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+   --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+   --crypt-password string Password or pass phrase for encryption.
+   --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+   --crypt-remote string Remote to encrypt/decrypt.
+   --crypt-show-mapping For all files listed show how the names encrypt.
+   --delete-after When synchronizing, delete files on destination after transferring (default)
+   --delete-before When synchronizing, delete files on destination before transferring
+   --delete-during When synchronizing, delete files during transfer
+   --delete-excluded Delete files on dest excluded from sync
+   --disable string Disable a comma separated list of features. Use help to see a list.
+   --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+   --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+   --drive-alternate-export Use alternate export URLs for google documents export.,
+   --drive-auth-owner-only Only consider files owned by the authenticated user.
+   --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
+   --drive-client-id string Google Application Client Id
+   --drive-client-secret string Google Application Client Secret
+   --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+   --drive-formats string Deprecated: see export_formats
+   --drive-impersonate string Impersonate this user when using a service account.
+   --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+   --drive-keep-revision-forever Keep new head revision of each file forever.
+   --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+   --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
+   --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
+   --drive-root-folder-id string ID of the root folder
+   --drive-scope string Scope that rclone should use when requesting access from drive.
+   --drive-service-account-credentials string Service Account Credentials JSON blob
+   --drive-service-account-file string Service Account Credentials JSON file path
+   --drive-shared-with-me Only show files that are shared with me.
+   --drive-skip-gdocs Skip google documents in all listings.
+   --drive-team-drive string ID of the Team Drive
+   --drive-trashed-only Only show files that are in the trash.
+   --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+   --drive-use-created-date Use file created date instead of modified date.,
+   --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+   --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
+   --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+   --dropbox-client-id string Dropbox App Client Id
+   --dropbox-client-secret string Dropbox App Client Secret
+   --dropbox-impersonate string Impersonate this user when using a business account.
+   -n, --dry-run Do a trial run with no permanent changes
+   --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+   --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+   --dump-headers Dump HTTP bodies - may contain sensitive info
+   --exclude stringArray Exclude files matching pattern
+   --exclude-from stringArray Read exclude patterns from file
+   --exclude-if-present string Exclude directories if filename is present
+   --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+   --files-from stringArray Read list of source-file names from file
+   -f, --filter stringArray Add a file-filtering rule
+   --filter-from stringArray Read filtering patterns from a file
+   --ftp-host string FTP host to connect to
+   --ftp-pass string FTP password
+   --ftp-port string FTP port, leave blank to use default (21)
+   --ftp-user string FTP username, leave blank for current username, $USER
+   --gcs-bucket-acl string Access Control List for new buckets.
+   --gcs-client-id string Google Application Client Id
+   --gcs-client-secret string Google Application Client Secret
+   --gcs-location string Location for the newly created buckets.
+   --gcs-object-acl string Access Control List for new objects.
+   --gcs-project-number string Project number.
+   --gcs-service-account-file string Service Account Credentials JSON file path
+   --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+   --http-url string URL of http host to connect to
+   --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+   --hubic-client-id string Hubic Client Id
+   --hubic-client-secret string Hubic Client Secret
+   --hubic-no-chunk Don't chunk files during streaming upload.
+   --ignore-case Ignore case in filters (case insensitive)
+   --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping; use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. + --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this.
(default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. + --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. 
(default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading.
(default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. 
This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). 
+ --swift-no-chunk Don't chunk files during streaming upload. + --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_config_delete.md b/docs/content/commands/rclone_config_delete.md index 74bc62fc4..15273f2c8 100644 --- a/docs/content/commands/rclone_config_delete.md +++ b/docs/content/commands/rclone_config_delete.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone config delete" slug: rclone_config_delete url: /commands/rclone_config_delete/ @@ -25,285 +25,303 @@ rclone config delete [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). 
(default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). 
(default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. (default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. 
(default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. - --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. 
(default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
- --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. 
(default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. 
- --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. 
- --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. 
- --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. - --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. 
(default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. (default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. 
Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. 
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
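+ # (Editor's illustrative sketch, not part of the generated help: any backend
+ # flag above may be given on the command line to override the config file for
+ # a single run. "mycache:" and "secret:" are hypothetical remote names.)
+ #   rclone copy /local/backup mycache:backup --cache-workers 8 --cache-chunk-size 10M
+ #   rclone ls secret: --crypt-show-mapping -vv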
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. 
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
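+ # (Editor's illustrative sketch, not generated output: combining the listing
+ # and filter flags above. "s3:bucket" is a hypothetical remote path; --dry-run
+ # previews the transfer without changing anything.)
+ #   rclone sync /data s3:bucket --fast-list --exclude "*.tmp" --dry-run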
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors Delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping; use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name: + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
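+ # (Editor's illustrative sketch, not generated output: tuning the --stats*
+ # flags above for a compact progress display. "remote:dst" is a hypothetical
+ # destination path.)
+ #   rclone copy /src remote:dst --stats 10s --stats-one-line --stats-unit bits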
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_config_dump.md b/docs/content/commands/rclone_config_dump.md index 387d911a5..bb785d724 100644 --- a/docs/content/commands/rclone_config_dump.md +++ b/docs/content/commands/rclone_config_dump.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone config dump" slug: rclone_config_dump url: /commands/rclone_config_dump/ @@ -25,285 +25,303 @@ rclone config dump [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). 
(default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). 
(default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. (default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. 
(default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. - --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. 
(default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
-      --files-from stringArray  Read list of source-file names from file
-  -f, --filter stringArray  Add a file-filtering rule
-      --filter-from stringArray  Read filtering patterns from a file
-      --ftp-host string  FTP host to connect to
-      --ftp-pass string  FTP password
-      --ftp-port string  FTP port, leave blank to use default (21)
-      --ftp-user string  FTP username, leave blank for current username, $USER
-      --gcs-bucket-acl string  Access Control List for new buckets.
-      --gcs-client-id string  Google Application Client Id
-      --gcs-client-secret string  Google Application Client Secret
-      --gcs-location string  Location for the newly created buckets.
-      --gcs-object-acl string  Access Control List for new objects.
-      --gcs-project-number string  Project number.
-      --gcs-service-account-file string  Service Account Credentials JSON file path
-      --gcs-storage-class string  The storage class to use when storing objects in Google Cloud Storage.
-      --http-url string  URL of http host to connect to
-      --hubic-chunk-size SizeSuffix  Above this size files will be chunked into a _segments container. (default 5G)
-      --hubic-client-id string  Hubic Client Id
-      --hubic-client-secret string  Hubic Client Secret
-      --ignore-case  Ignore case in filters (case insensitive)
-      --ignore-checksum  Skip post copy check of checksums.
-      --ignore-errors  delete even if there are I/O errors
-      --ignore-existing  Skip all files that exist on destination
-      --ignore-size  Ignore size when skipping use mod-time or checksum.
-  -I, --ignore-times  Don't skip files that match size and time - transfer all files
-      --immutable  Do not modify files. Fail if existing files have been modified.
-      --include stringArray  Include files matching pattern
-      --include-from stringArray  Read include patterns from file
-      --jottacloud-hard-delete  Delete files permanently rather than putting them into the trash.
-      --jottacloud-md5-memory-limit SizeSuffix  Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
-      --jottacloud-mountpoint string  The mountpoint to use.
-      --jottacloud-pass string  Password.
-      --jottacloud-unlink  Remove existing public link to file/folder with link command rather than creating.
-      --jottacloud-user string  User Name
-      --local-no-check-updated  Don't check to see if the files change during upload
-      --local-no-unicode-normalization  Don't apply unicode normalization to paths and filenames (Deprecated)
-      --local-nounc string  Disable UNC (long path names) conversion on Windows
-      --log-file string  Log everything to this file
-      --log-format string  Comma separated list of log format options (default "date,time")
-      --log-level string  Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
-      --low-level-retries int  Number of low level retries to do. (default 10)
-      --max-age duration  Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-      --max-backlog int  Maximum number of objects in sync or check backlog. (default 10000)
-      --max-delete int  When synchronizing, limit the number of deletes (default -1)
-      --max-depth int  If set limits the recursion depth to this. (default -1)
-      --max-size int  Only transfer files smaller than this in k or suffix b|k|M|G (default off)
-      --max-transfer int  Maximum size of data to transfer. (default off)
-      --mega-debug  Output more debug from Mega.
-      --mega-hard-delete  Delete files permanently rather than putting them into the trash.
-      --mega-pass string  Password.
-      --mega-user string  User name
-      --memprofile string  Write memory profile to file
-      --min-age duration  Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-      --min-size int  Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-      --modify-window duration  Max time diff to be considered the same (default 1ns)
-      --no-check-certificate  Do not verify the server SSL certificate. Insecure.
-      --no-gzip-encoding  Don't set Accept-Encoding: gzip.
-      --no-traverse  Obsolete - does nothing.
-      --no-update-modtime  Don't update destination mod-time if files identical.
-  -x, --one-file-system  Don't cross filesystem boundaries (unix/macOS only).
-      --onedrive-chunk-size SizeSuffix  Chunk size to upload files with - must be multiple of 320k. (default 10M)
-      --onedrive-client-id string  Microsoft App Client Id
-      --onedrive-client-secret string  Microsoft App Client Secret
-      --onedrive-drive-id string  The ID of the drive to use
-      --onedrive-drive-type string  The type of the drive ( personal | business | documentLibrary )
-      --onedrive-expose-onenote-files  Set to make OneNote files show up in directory listings.
-      --opendrive-password string  Password.
-      --opendrive-username string  Username
-      --pcloud-client-id string  Pcloud App Client Id
-      --pcloud-client-secret string  Pcloud App Client Secret
-  -P, --progress  Show progress during transfer.
-      --qingstor-access-key-id string  QingStor Access Key ID
-      --qingstor-connection-retries int  Number of connection retries. (default 3)
-      --qingstor-endpoint string  Enter a endpoint URL to connection QingStor API.
-      --qingstor-env-auth  Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
-      --qingstor-secret-access-key string  QingStor Secret Access Key (password)
-      --qingstor-zone string  Zone to connect to.
-  -q, --quiet  Print as little stuff as possible
-      --rc  Enable the remote control server.
-      --rc-addr string  IPaddress:Port or :Port to bind server to. (default "localhost:5572")
-      --rc-cert string  SSL PEM key (concatenation of certificate and CA certificate)
-      --rc-client-ca string  Client certificate authority to verify clients with
-      --rc-files string  Path to local files to serve on the HTTP server.
-      --rc-htpasswd string  htpasswd file - if not provided no authentication is done
-      --rc-key string  SSL PEM Private key
-      --rc-max-header-bytes int  Maximum size of request header (default 4096)
-      --rc-no-auth  Don't require auth for certain methods.
-      --rc-pass string  Password for authentication.
-      --rc-realm string  realm for authentication (default "rclone")
-      --rc-serve  Enable the serving of remote objects.
-      --rc-server-read-timeout duration  Timeout for server reading data (default 1h0m0s)
-      --rc-server-write-timeout duration  Timeout for server writing data (default 1h0m0s)
-      --rc-user string  User name for authentication.
-      --retries int  Retry operations this many times if they fail (default 3)
-      --retries-sleep duration  Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
-      --s3-access-key-id string  AWS Access Key ID.
-      --s3-acl string  Canned ACL used when creating buckets and storing or copying objects.
-      --s3-chunk-size SizeSuffix  Chunk size to use for uploading. (default 5M)
-      --s3-disable-checksum  Don't store MD5 checksum with object metadata
-      --s3-endpoint string  Endpoint for S3 API.
-      --s3-env-auth  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
-      --s3-force-path-style  If true use path style access if false use virtual hosted style. (default true)
-      --s3-location-constraint string  Location constraint - must be set to match the Region.
-      --s3-provider string  Choose your S3 provider.
-      --s3-region string  Region to connect to.
-      --s3-secret-access-key string  AWS Secret Access Key (password)
-      --s3-server-side-encryption string  The server-side encryption algorithm used when storing this object in S3.
-      --s3-session-token string  An AWS session token
-      --s3-sse-kms-key-id string  If using KMS ID you must provide the ARN of Key.
-      --s3-storage-class string  The storage class to use when storing new objects in S3.
-      --s3-upload-concurrency int  Concurrency for multipart uploads. (default 2)
-      --s3-v2-auth  If true use v2 authentication.
-      --sftp-ask-password  Allow asking for SFTP password when needed.
-      --sftp-disable-hashcheck  Disable the execution of SSH commands to determine if remote file hashing is available.
-      --sftp-host string  SSH host to connect to
-      --sftp-key-file string  Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
-      --sftp-pass string  SSH password, leave blank to use ssh-agent.
-      --sftp-path-override string  Override path used by SSH connection.
-      --sftp-port string  SSH port, leave blank to use default (22)
-      --sftp-set-modtime  Set the modified time on the remote if set. (default true)
-      --sftp-use-insecure-cipher  Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
-      --sftp-user string  SSH username, leave blank for current username, ncw
-      --size-only  Skip based on size only, not mod-time or checksum
-      --skip-links  Don't warn about skipped symlinks.
-      --stats duration  Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
-      --stats-file-name-length int  Max file name length in stats. 0 for no limit (default 40)
-      --stats-log-level string  Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
-      --stats-one-line  Make the stats fit on one line.
-      --stats-unit string  Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
-      --streaming-upload-cutoff int  Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
-      --suffix string  Suffix for use with --backup-dir.
-      --swift-auth string  Authentication URL for server (OS_AUTH_URL).
-      --swift-auth-token string  Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
-      --swift-auth-version int  AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
-      --swift-chunk-size SizeSuffix  Above this size files will be chunked into a _segments container. (default 5G)
-      --swift-domain string  User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
-      --swift-endpoint-type string  Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-      --swift-env-auth  Get swift credentials from environment variables in standard OpenStack form.
-      --swift-key string  API key or password (OS_PASSWORD).
-      --swift-region string  Region name - optional (OS_REGION_NAME)
-      --swift-storage-policy string  The storage policy to use when creating a new container
-      --swift-storage-url string  Storage URL - optional (OS_STORAGE_URL)
-      --swift-tenant string  Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
-      --swift-tenant-domain string  Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
-      --swift-tenant-id string  Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
-      --swift-user string  User name to log in (OS_USERNAME).
-      --swift-user-id string  User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
-      --syslog  Use Syslog for logging
-      --syslog-facility string  Facility for syslog, eg KERN,USER,... (default "DAEMON")
-      --timeout duration  IO idle timeout (default 5m0s)
-      --tpslimit float  Limit HTTP transactions per second to this.
-      --tpslimit-burst int  Max burst of transactions for --tpslimit. (default 1)
-      --track-renames  When synchronizing, track file renames and do a server side move if possible
-      --transfers int  Number of file transfers to run in parallel. (default 4)
-      --union-remotes string  List of space separated remotes.
-  -u, --update  Skip files that are newer on the destination.
-      --use-server-modtime  Use server modified time instead of object metadata
-      --user-agent string  Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45")
-  -v, --verbose count  Print lots more stuff (repeat for more)
-      --webdav-bearer-token string  Bearer token instead of user/pass (eg a Macaroon)
-      --webdav-pass string  Password.
-      --webdav-url string  URL of http host to connect to
-      --webdav-user string  User name
-      --webdav-vendor string  Name of the Webdav site/service/software you are using
-      --yandex-client-id string  Yandex Client Id
-      --yandex-client-secret string  Yandex Client Secret
-      --yandex-unlink  Remove existing public link to file/folder with link command rather than creating.
+      --acd-auth-url string  Auth server URL.
+      --acd-client-id string  Amazon Application Client ID.
+      --acd-client-secret string  Amazon Application Client Secret.
+      --acd-templink-threshold SizeSuffix  Files >= this size will be downloaded via their tempLink. (default 9G)
+      --acd-token-url string  Token server url.
+      --acd-upload-wait-per-gb Duration  Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+      --alias-remote string  Remote or path to alias.
+      --ask-password  Allow prompt for password for encrypted configuration. (default true)
+      --auto-confirm  If enabled, do not request console confirmation.
+      --azureblob-access-tier string  Access tier of blob: hot, cool or archive.
+      --azureblob-account string  Storage Account Name (leave blank to use connection string or SAS URL)
+      --azureblob-chunk-size SizeSuffix  Upload chunk size (<= 100MB). (default 4M)
+      --azureblob-endpoint string  Endpoint for the service
+      --azureblob-key string  Storage Account Key (leave blank to use connection string or SAS URL)
+      --azureblob-list-chunk int  Size of blob list. (default 5000)
+      --azureblob-sas-url string  SAS URL for container level access only
+      --azureblob-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+      --b2-account string  Account ID or Application Key ID
+      --b2-chunk-size SizeSuffix  Upload chunk size. Must fit in memory. (default 96M)
+      --b2-disable-checksum  Disable checksums for large (> upload cutoff) files
+      --b2-endpoint string  Endpoint for the service.
+      --b2-hard-delete  Permanently delete files on remote removal, otherwise hide files.
+      --b2-key string  Application Key
+      --b2-test-mode string  A flag string for X-Bz-Test-Mode header for debugging.
+      --b2-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload. (default 200M)
+      --b2-versions  Include old versions in directory listings.
+      --backup-dir string  Make backups into hierarchy based in DIR.
+      --bind string  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+      --box-client-id string  Box App Client Id.
+      --box-client-secret string  Box App Client Secret
+      --box-commit-retries int  Max number of times to try committing a multipart file. (default 100)
+      --box-upload-cutoff SizeSuffix  Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+      --buffer-size SizeSuffix  In memory buffer size when reading files for each --transfer. (default 16M)
+      --bwlimit BwTimetable  Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+      --cache-chunk-clean-interval Duration  How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+      --cache-chunk-no-memory  Disable the in-memory cache for storing chunks during streaming.
+      --cache-chunk-path string  Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
+      --cache-chunk-size SizeSuffix  The size of a chunk (partial file data). (default 5M)
+      --cache-chunk-total-size SizeSuffix  The total size that the chunks can take up on the local disk. (default 10G)
+      --cache-db-path string  Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
+      --cache-db-purge  Clear all the cached data for this remote on start.
+      --cache-db-wait-time Duration  How long to wait for the DB to be available - 0 is unlimited (default 1s)
+      --cache-dir string  Directory rclone will use for caching. (default "$HOME/.cache/rclone")
+      --cache-info-age Duration  How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+      --cache-plex-insecure string  Skip all certificate verifications when connecting to the Plex server
+      --cache-plex-password string  The password of the Plex user
+      --cache-plex-url string  The URL of the Plex server
+      --cache-plex-username string  The username of the Plex user
+      --cache-read-retries int  How many times to retry a read from a cache storage. (default 10)
+      --cache-remote string  Remote to cache.
+      --cache-rps int  Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+      --cache-tmp-upload-path string  Directory to keep temporary files until they are uploaded.
+      --cache-tmp-wait-time Duration  How long should files be stored in local cache before being uploaded (default 15s)
+      --cache-workers int  How many workers should run in parallel to download chunks. (default 4)
+      --cache-writes  Cache file data on writes through the FS
+      --checkers int  Number of checkers to run in parallel. (default 8)
+  -c, --checksum  Skip based on checksum (if available) & size, not mod-time & size
+      --config string  Config file. (default "/home/ncw/.rclone.conf")
+      --contimeout duration  Connect timeout (default 1m0s)
+  -L, --copy-links  Follow symlinks and copy the pointed to item.
+      --cpuprofile string  Write cpu profile to file
+      --crypt-directory-name-encryption  Option to either encrypt directory names or leave them intact. (default true)
+      --crypt-filename-encryption string  How to encrypt the filenames. (default "standard")
+      --crypt-password string  Password or pass phrase for encryption.
+      --crypt-password2 string  Password or pass phrase for salt. Optional but recommended.
+      --crypt-remote string  Remote to encrypt/decrypt.
+      --crypt-show-mapping  For all files listed show how the names encrypt.
+      --delete-after  When synchronizing, delete files on destination after transferring (default)
+      --delete-before  When synchronizing, delete files on destination before transferring
+      --delete-during  When synchronizing, delete files during transfer
+      --delete-excluded  Delete files on dest excluded from sync
+      --disable string  Disable a comma separated list of features. Use help to see a list.
+      --drive-acknowledge-abuse  Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+      --drive-allow-import-name-change  Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+      --drive-alternate-export  Use alternate export URLs for google documents export.,
+      --drive-auth-owner-only  Only consider files owned by the authenticated user.
+      --drive-chunk-size SizeSuffix  Upload chunk size. Must a power of 2 >= 256k. (default 8M)
+      --drive-client-id string  Google Application Client Id
+      --drive-client-secret string  Google Application Client Secret
+      --drive-export-formats string  Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+      --drive-formats string  Deprecated: see export_formats
+      --drive-impersonate string  Impersonate this user when using a service account.
+      --drive-import-formats string  Comma separated list of preferred formats for uploading Google docs.
+      --drive-keep-revision-forever  Keep new head revision of each file forever.
+      --drive-list-chunk int  Size of listing chunk 100-1000. 0 to disable. (default 1000)
+      --drive-pacer-burst int  Number of API calls to allow without sleeping. (default 100)
+      --drive-pacer-min-sleep Duration  Minimum time to sleep between API calls. (default 100ms)
+      --drive-root-folder-id string  ID of the root folder
+      --drive-scope string  Scope that rclone should use when requesting access from drive.
+      --drive-service-account-credentials string  Service Account Credentials JSON blob
+      --drive-service-account-file string  Service Account Credentials JSON file path
+      --drive-shared-with-me  Only show files that are shared with me.
+      --drive-skip-gdocs  Skip google documents in all listings.
+      --drive-team-drive string  ID of the Team Drive
+      --drive-trashed-only  Only show files that are in the trash.
+      --drive-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (default 8M)
+      --drive-use-created-date  Use file created date instead of modified date.,
+      --drive-use-trash  Send files to the trash instead of deleting permanently. (default true)
+      --drive-v2-download-min-size SizeSuffix  If Object's are greater, use drive v2 API to download. (default off)
+      --dropbox-chunk-size SizeSuffix  Upload chunk size. (< 150M). (default 48M)
+      --dropbox-client-id string  Dropbox App Client Id
+      --dropbox-client-secret string  Dropbox App Client Secret
+      --dropbox-impersonate string  Impersonate this user when using a business account.
+  -n, --dry-run  Do a trial run with no permanent changes
+      --dump DumpFlags  List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+      --dump-bodies  Dump HTTP headers and bodies - may contain sensitive info
+      --dump-headers  Dump HTTP bodies - may contain sensitive info
+      --exclude stringArray  Exclude files matching pattern
+      --exclude-from stringArray  Read exclude patterns from file
+      --exclude-if-present string  Exclude directories if filename is present
+      --fast-list  Use recursive list if available. Uses more memory but fewer transactions.
+      --files-from stringArray  Read list of source-file names from file
+  -f, --filter stringArray  Add a file-filtering rule
+      --filter-from stringArray  Read filtering patterns from a file
+      --ftp-host string  FTP host to connect to
+      --ftp-pass string  FTP password
+      --ftp-port string  FTP port, leave blank to use default (21)
+      --ftp-user string  FTP username, leave blank for current username, $USER
+      --gcs-bucket-acl string  Access Control List for new buckets.
+      --gcs-client-id string  Google Application Client Id
+      --gcs-client-secret string  Google Application Client Secret
+      --gcs-location string  Location for the newly created buckets.
+      --gcs-object-acl string  Access Control List for new objects.
+      --gcs-project-number string  Project number.
+      --gcs-service-account-file string  Service Account Credentials JSON file path
+      --gcs-storage-class string  The storage class to use when storing objects in Google Cloud Storage.
+      --http-url string  URL of http host to connect to
+      --hubic-chunk-size SizeSuffix  Above this size files will be chunked into a _segments container. (default 5G)
+      --hubic-client-id string  Hubic Client Id
+      --hubic-client-secret string  Hubic Client Secret
+      --hubic-no-chunk  Don't chunk files during streaming upload.
+      --ignore-case  Ignore case in filters (case insensitive)
+      --ignore-checksum  Skip post copy check of checksums.
+      --ignore-errors  delete even if there are I/O errors
+      --ignore-existing  Skip all files that exist on destination
+      --ignore-size  Ignore size when skipping use mod-time or checksum.
+  -I, --ignore-times  Don't skip files that match size and time - transfer all files
+      --immutable  Do not modify files. Fail if existing files have been modified.
+      --include stringArray  Include files matching pattern
+      --include-from stringArray  Read include patterns from file
+      --jottacloud-hard-delete  Delete files permanently rather than putting them into the trash.
+      --jottacloud-md5-memory-limit SizeSuffix  Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+      --jottacloud-mountpoint string  The mountpoint to use.
+      --jottacloud-unlink  Remove existing public link to file/folder with link command rather than creating.
+      --jottacloud-upload-resume-limit SizeSuffix  Files bigger than this can be resumed if the upload fail's. (default 10M)
+      --jottacloud-user string  User Name:
+  -l, --links  Translate symlinks to/from regular files with a '.rclonelink' extension
+      --local-no-check-updated  Don't check to see if the files change during upload
+      --local-no-unicode-normalization  Don't apply unicode normalization to paths and filenames (Deprecated)
+      --local-nounc string  Disable UNC (long path names) conversion on Windows
+      --log-file string  Log everything to this file
+      --log-format string  Comma separated list of log format options (default "date,time")
+      --log-level string  Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+      --low-level-retries int  Number of low level retries to do. (default 10)
+      --max-age Duration  Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --max-backlog int  Maximum number of objects in sync or check backlog. (default 10000)
+      --max-delete int  When synchronizing, limit the number of deletes (default -1)
+      --max-depth int  If set limits the recursion depth to this. (default -1)
+      --max-size SizeSuffix  Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+      --max-transfer SizeSuffix  Maximum size of data to transfer. (default off)
+      --mega-debug  Output more debug from Mega.
+      --mega-hard-delete  Delete files permanently rather than putting them into the trash.
+      --mega-pass string  Password.
+      --mega-user string  User name
+      --memprofile string  Write memory profile to file
+      --min-age Duration  Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --min-size SizeSuffix  Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+      --modify-window duration  Max time diff to be considered the same (default 1ns)
+      --no-check-certificate  Do not verify the server SSL certificate. Insecure.
+      --no-gzip-encoding  Don't set Accept-Encoding: gzip.
+      --no-traverse  Don't traverse destination file system on copy.
+      --no-update-modtime  Don't update destination mod-time if files identical.
+  -x, --one-file-system  Don't cross filesystem boundaries (unix/macOS only).
+      --onedrive-chunk-size SizeSuffix  Chunk size to upload files with - must be multiple of 320k. (default 10M)
+      --onedrive-client-id string  Microsoft App Client Id
+      --onedrive-client-secret string  Microsoft App Client Secret
+      --onedrive-drive-id string  The ID of the drive to use
+      --onedrive-drive-type string  The type of the drive ( personal | business | documentLibrary )
+      --onedrive-expose-onenote-files  Set to make OneNote files show up in directory listings.
+      --opendrive-password string  Password.
+      --opendrive-username string  Username
+      --pcloud-client-id string  Pcloud App Client Id
+      --pcloud-client-secret string  Pcloud App Client Secret
+  -P, --progress  Show progress during transfer.
+      --qingstor-access-key-id string  QingStor Access Key ID
+      --qingstor-chunk-size SizeSuffix  Chunk size to use for uploading. (default 4M)
+      --qingstor-connection-retries int  Number of connection retries. (default 3)
+      --qingstor-endpoint string  Enter a endpoint URL to connection QingStor API.
+      --qingstor-env-auth  Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
+      --qingstor-secret-access-key string  QingStor Secret Access Key (password)
+      --qingstor-upload-concurrency int  Concurrency for multipart uploads. (default 1)
+      --qingstor-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (default 200M)
+      --qingstor-zone string  Zone to connect to.
+  -q, --quiet  Print as little stuff as possible
+      --rc  Enable the remote control server.
+      --rc-addr string  IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+      --rc-cert string  SSL PEM key (concatenation of certificate and CA certificate)
+      --rc-client-ca string  Client certificate authority to verify clients with
+      --rc-files string  Path to local files to serve on the HTTP server.
+      --rc-htpasswd string  htpasswd file - if not provided no authentication is done
+      --rc-key string  SSL PEM Private key
+      --rc-max-header-bytes int  Maximum size of request header (default 4096)
+      --rc-no-auth  Don't require auth for certain methods.
+      --rc-pass string  Password for authentication.
+      --rc-realm string  realm for authentication (default "rclone")
+      --rc-serve  Enable the serving of remote objects.
+      --rc-server-read-timeout duration  Timeout for server reading data (default 1h0m0s)
+      --rc-server-write-timeout duration  Timeout for server writing data (default 1h0m0s)
+      --rc-user string  User name for authentication.
+      --retries int  Retry operations this many times if they fail (default 3)
+      --retries-sleep duration  Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+      --s3-access-key-id string  AWS Access Key ID.
+      --s3-acl string  Canned ACL used when creating buckets and storing or copying objects.
+      --s3-bucket-acl string  Canned ACL used when creating buckets.
+      --s3-chunk-size SizeSuffix  Chunk size to use for uploading. (default 5M)
+      --s3-disable-checksum  Don't store MD5 checksum with object metadata
+      --s3-endpoint string  Endpoint for S3 API.
+      --s3-env-auth  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+      --s3-force-path-style  If true use path style access if false use virtual hosted style. (default true)
+      --s3-location-constraint string  Location constraint - must be set to match the Region.
+      --s3-provider string  Choose your S3 provider.
+      --s3-region string  Region to connect to.
+      --s3-secret-access-key string  AWS Secret Access Key (password)
+      --s3-server-side-encryption string  The server-side encryption algorithm used when storing this object in S3.
+      --s3-session-token string  An AWS session token
+      --s3-sse-kms-key-id string  If using KMS ID you must provide the ARN of Key.
+      --s3-storage-class string  The storage class to use when storing new objects in S3.
+      --s3-upload-concurrency int  Concurrency for multipart uploads. (default 4)
+      --s3-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (default 200M)
+      --s3-v2-auth  If true use v2 authentication.
+      --sftp-ask-password  Allow asking for SFTP password when needed.
+      --sftp-disable-hashcheck  Disable the execution of SSH commands to determine if remote file hashing is available.
+      --sftp-host string  SSH host to connect to
+      --sftp-key-file string  Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
+      --sftp-key-file-pass string  The passphrase to decrypt the PEM-encoded private key file.
+      --sftp-key-use-agent  When set forces the usage of the ssh-agent.
+      --sftp-pass string  SSH password, leave blank to use ssh-agent.
+      --sftp-path-override string  Override path used by SSH connection.
+      --sftp-port string  SSH port, leave blank to use default (22)
+      --sftp-set-modtime  Set the modified time on the remote if set. (default true)
+      --sftp-use-insecure-cipher  Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+      --sftp-user string  SSH username, leave blank for current username, ncw
+      --size-only  Skip based on size only, not mod-time or checksum
+      --skip-links  Don't warn about skipped symlinks.
+      --stats duration  Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+      --stats-file-name-length int  Max file name length in stats. 0 for no limit (default 45)
+      --stats-log-level string  Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+      --stats-one-line  Make the stats fit on one line.
+      --stats-unit string  Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+      --streaming-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+      --suffix string  Suffix for use with --backup-dir.
+      --swift-application-credential-id string  Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
+      --swift-application-credential-name string  Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
+      --swift-application-credential-secret string  Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
+      --swift-auth string  Authentication URL for server (OS_AUTH_URL).
+      --swift-auth-token string  Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+      --swift-auth-version int  AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+      --swift-chunk-size SizeSuffix  Above this size files will be chunked into a _segments container. (default 5G)
+      --swift-domain string  User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+      --swift-endpoint-type string  Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+      --swift-env-auth  Get swift credentials from environment variables in standard OpenStack form.
+      --swift-key string  API key or password (OS_PASSWORD).
+      --swift-no-chunk  Don't chunk files during streaming upload.
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_config_edit.md b/docs/content/commands/rclone_config_edit.md index e41065a48..12155573e 100644 --- a/docs/content/commands/rclone_config_edit.md +++ b/docs/content/commands/rclone_config_edit.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone config edit" slug: rclone_config_edit url: /commands/rclone_config_edit/ @@ -28,285 +28,303 @@ rclone config edit [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). 
(default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). 
(default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. (default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. 
(default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. - --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. 
(default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
- --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. 
(default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. 
- --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. 
- --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. 
- --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. - --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. 
(default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. (default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. 
Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. 
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. 
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors Delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping; use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_config_file.md b/docs/content/commands/rclone_config_file.md index 9769a899e..65d6da2e0 100644 --- a/docs/content/commands/rclone_config_file.md +++ b/docs/content/commands/rclone_config_file.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone config file" slug: rclone_config_file url: /commands/rclone_config_file/ @@ -25,285 +25,303 @@ rclone config file [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). 
(default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). 
(default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. (default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. 
(default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. - --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. 
(default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
- --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. 
(default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. 
- --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. 
- --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. 
- --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. - --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. 
(default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. (default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. 
Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. 
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. 
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors Delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping; use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name: + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_config_password.md b/docs/content/commands/rclone_config_password.md index fcf888ec4..5a1432679 100644 --- a/docs/content/commands/rclone_config_password.md +++ b/docs/content/commands/rclone_config_password.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone config password" slug: rclone_config_password url: /commands/rclone_config_password/ @@ -32,285 +32,303 @@ rclone config password [ ]+ [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. 
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. 
- --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. 
(default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. 
- --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. 
Uses more memory but fewer transactions. - --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. 
- --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. - --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. 
(default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. - --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. 
- --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. - --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. 
- --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. 
(default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. 
(default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. 
(default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. + --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. 
(default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. + --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export., + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. 
(default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. + --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip Google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use the drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors Delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping; use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be a multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1)
+      --qingstor-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 200M)
+      --qingstor-zone string   Zone to connect to.
+  -q, --quiet   Print as little stuff as possible
+      --rc   Enable the remote control server.
+      --rc-addr string   IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+      --rc-cert string   SSL PEM key (concatenation of certificate and CA certificate)
+      --rc-client-ca string   Client certificate authority to verify clients with
+      --rc-files string   Path to local files to serve on the HTTP server.
+      --rc-htpasswd string   htpasswd file - if not provided no authentication is done
+      --rc-key string   SSL PEM Private key
+      --rc-max-header-bytes int   Maximum size of request header (default 4096)
+      --rc-no-auth   Don't require auth for certain methods.
+      --rc-pass string   Password for authentication.
+      --rc-realm string   realm for authentication (default "rclone")
+      --rc-serve   Enable the serving of remote objects.
+      --rc-server-read-timeout duration   Timeout for server reading data (default 1h0m0s)
+      --rc-server-write-timeout duration   Timeout for server writing data (default 1h0m0s)
+      --rc-user string   User name for authentication.
+      --retries int   Retry operations this many times if they fail (default 3)
+      --retries-sleep duration   Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+      --s3-access-key-id string   AWS Access Key ID.
+      --s3-acl string   Canned ACL used when creating buckets and storing or copying objects.
+      --s3-bucket-acl string   Canned ACL used when creating buckets.
+      --s3-chunk-size SizeSuffix   Chunk size to use for uploading. (default 5M)
+      --s3-disable-checksum   Don't store MD5 checksum with object metadata
+      --s3-endpoint string   Endpoint for S3 API.
+      --s3-env-auth   Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+      --s3-force-path-style   If true use path style access if false use virtual hosted style. (default true)
+      --s3-location-constraint string   Location constraint - must be set to match the Region.
+      --s3-provider string   Choose your S3 provider.
+      --s3-region string   Region to connect to.
+      --s3-secret-access-key string   AWS Secret Access Key (password)
+      --s3-server-side-encryption string   The server-side encryption algorithm used when storing this object in S3.
+      --s3-session-token string   An AWS session token
+      --s3-sse-kms-key-id string   If using KMS ID you must provide the ARN of Key.
+      --s3-storage-class string   The storage class to use when storing new objects in S3.
+      --s3-upload-concurrency int   Concurrency for multipart uploads. (default 4)
+      --s3-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 200M)
+      --s3-v2-auth   If true use v2 authentication.
+      --sftp-ask-password   Allow asking for SFTP password when needed.
+      --sftp-disable-hashcheck   Disable the execution of SSH commands to determine if remote file hashing is available.
+      --sftp-host string   SSH host to connect to
+      --sftp-key-file string   Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
+      --sftp-key-file-pass string   The passphrase to decrypt the PEM-encoded private key file.
+      --sftp-key-use-agent   When set forces the usage of the ssh-agent.
+      --sftp-pass string   SSH password, leave blank to use ssh-agent.
+      --sftp-path-override string   Override path used by SSH connection.
+      --sftp-port string   SSH port, leave blank to use default (22)
+      --sftp-set-modtime   Set the modified time on the remote if set. (default true)
+      --sftp-use-insecure-cipher   Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+      --sftp-user string   SSH username, leave blank for current username, ncw
+      --size-only   Skip based on size only, not mod-time or checksum
+      --skip-links   Don't warn about skipped symlinks.
+      --stats duration   Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+      --stats-file-name-length int   Max file name length in stats. 0 for no limit (default 45)
+      --stats-log-level string   Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+      --stats-one-line   Make the stats fit on one line.
+      --stats-unit string   Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+      --streaming-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+      --suffix string   Suffix for use with --backup-dir.
+      --swift-application-credential-id string   Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
+      --swift-application-credential-name string   Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
+      --swift-application-credential-secret string   Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
+      --swift-auth string   Authentication URL for server (OS_AUTH_URL).
+      --swift-auth-token string   Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+      --swift-auth-version int   AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+      --swift-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
+      --swift-domain string   User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+      --swift-endpoint-type string   Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+      --swift-env-auth   Get swift credentials from environment variables in standard OpenStack form.
+      --swift-key string   API key or password (OS_PASSWORD).
+      --swift-no-chunk   Don't chunk files during streaming upload.
+      --swift-region string   Region name - optional (OS_REGION_NAME)
+      --swift-storage-policy string   The storage policy to use when creating a new container
+      --swift-storage-url string   Storage URL - optional (OS_STORAGE_URL)
+      --swift-tenant string   Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+      --swift-tenant-domain string   Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+      --swift-tenant-id string   Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+      --swift-user string   User name to log in (OS_USERNAME).
+      --swift-user-id string   User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+      --syslog   Use Syslog for logging
+      --syslog-facility string   Facility for syslog, eg KERN,USER,... (default "DAEMON")
+      --timeout duration   IO idle timeout (default 5m0s)
+      --tpslimit float   Limit HTTP transactions per second to this.
+      --tpslimit-burst int   Max burst of transactions for --tpslimit. (default 1)
+      --track-renames   When synchronizing, track file renames and do a server side move if possible
+      --transfers int   Number of file transfers to run in parallel. (default 4)
+      --union-remotes string   List of space separated remotes.
+  -u, --update   Skip files that are newer on the destination.
+      --use-cookies   Enable session cookiejar.
+      --use-mmap   Use mmap allocator (see docs).
+      --use-server-modtime   Use server modified time instead of object metadata
+      --user-agent string   Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
+  -v, --verbose count   Print lots more stuff (repeat for more)
+      --webdav-bearer-token string   Bearer token instead of user/pass (eg a Macaroon)
+      --webdav-pass string   Password.
+      --webdav-url string   URL of http host to connect to
+      --webdav-user string   User name
+      --webdav-vendor string   Name of the Webdav site/service/software you are using
+      --yandex-client-id string   Yandex Client Id
+      --yandex-client-secret string   Yandex Client Secret
+      --yandex-unlink   Remove existing public link to file/folder with link command rather than creating.
 ```
 
 ### SEE ALSO
 
 * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
 
-###### Auto generated by spf13/cobra on 24-Nov-2018
+###### Auto generated by spf13/cobra on 9-Feb-2019
diff --git a/docs/content/commands/rclone_config_providers.md b/docs/content/commands/rclone_config_providers.md
index b21df8b0a..e78af07f4 100644
--- a/docs/content/commands/rclone_config_providers.md
+++ b/docs/content/commands/rclone_config_providers.md
@@ -1,5 +1,5 @@
 ---
-date: 2018-11-24T13:43:29Z
+date: 2019-02-09T10:42:18Z
 title: "rclone config providers"
 slug: rclone_config_providers
 url: /commands/rclone_config_providers/
@@ -25,285 +25,303 @@ rclone config providers [flags]
 ### Options inherited from parent commands
 
 ```
-      --acd-auth-url string   Auth server URL.
-      --acd-client-id string   Amazon Application Client ID.
-      --acd-client-secret string   Amazon Application Client Secret.
-      --acd-templink-threshold SizeSuffix   Files >= this size will be downloaded via their tempLink. (default 9G)
-      --acd-token-url string   Token server url.
-      --acd-upload-wait-per-gb Duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
-      --alias-remote string   Remote or path to alias.
-      --ask-password   Allow prompt for password for encrypted configuration. (default true)
-      --auto-confirm   If enabled, do not request console confirmation.
-      --azureblob-access-tier string   Access tier of blob: hot, cool or archive.
-      --azureblob-account string   Storage Account Name (leave blank to use connection string or SAS URL)
-      --azureblob-chunk-size SizeSuffix   Upload chunk size (<= 100MB). (default 4M)
-      --azureblob-endpoint string   Endpoint for the service
-      --azureblob-key string   Storage Account Key (leave blank to use connection string or SAS URL)
-      --azureblob-list-chunk int   Size of blob list. (default 5000)
-      --azureblob-sas-url string   SAS URL for container level access only
-      --azureblob-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (<= 256MB). (default 256M)
-      --b2-account string   Account ID or Application Key ID
-      --b2-chunk-size SizeSuffix   Upload chunk size. Must fit in memory. (default 96M)
-      --b2-endpoint string   Endpoint for the service.
-      --b2-hard-delete   Permanently delete files on remote removal, otherwise hide files.
-      --b2-key string   Application Key
-      --b2-test-mode string   A flag string for X-Bz-Test-Mode header for debugging.
-      --b2-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload. (default 200M)
-      --b2-versions   Include old versions in directory listings.
-      --backup-dir string   Make backups into hierarchy based in DIR.
-      --bind string   Local address to bind to for outgoing connections, IPv4, IPv6 or name.
-      --box-client-id string   Box App Client Id.
-      --box-client-secret string   Box App Client Secret
-      --box-commit-retries int   Max number of times to try committing a multipart file. (default 100)
-      --box-upload-cutoff SizeSuffix   Cutoff for switching to multipart upload (>= 50MB). (default 50M)
-      --buffer-size int   In memory buffer size when reading files for each --transfer. (default 16M)
-      --bwlimit BwTimetable   Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
-      --cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
-      --cache-chunk-no-memory   Disable the in-memory cache for storing chunks during streaming.
-      --cache-chunk-path string   Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
-      --cache-chunk-size SizeSuffix   The size of a chunk (partial file data). (default 5M)
-      --cache-chunk-total-size SizeSuffix   The total size that the chunks can take up on the local disk. (default 10G)
-      --cache-db-path string   Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
-      --cache-db-purge   Clear all the cached data for this remote on start.
-      --cache-db-wait-time Duration   How long to wait for the DB to be available - 0 is unlimited (default 1s)
-      --cache-dir string   Directory rclone will use for caching. (default "$HOME/.cache/rclone")
-      --cache-info-age Duration   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
-      --cache-plex-insecure string   Skip all certificate verifications when connecting to the Plex server
-      --cache-plex-password string   The password of the Plex user
-      --cache-plex-url string   The URL of the Plex server
-      --cache-plex-username string   The username of the Plex user
-      --cache-read-retries int   How many times to retry a read from a cache storage. (default 10)
-      --cache-remote string   Remote to cache.
-      --cache-rps int   Limits the number of requests per second to the source FS (-1 to disable) (default -1)
-      --cache-tmp-upload-path string   Directory to keep temporary files until they are uploaded.
-      --cache-tmp-wait-time Duration   How long should files be stored in local cache before being uploaded (default 15s)
-      --cache-workers int   How many workers should run in parallel to download chunks. (default 4)
-      --cache-writes   Cache file data on writes through the FS
-      --checkers int   Number of checkers to run in parallel. (default 8)
-  -c, --checksum   Skip based on checksum & size, not mod-time & size
-      --config string   Config file. (default "/home/ncw/.rclone.conf")
-      --contimeout duration   Connect timeout (default 1m0s)
-  -L, --copy-links   Follow symlinks and copy the pointed to item.
-      --cpuprofile string   Write cpu profile to file
-      --crypt-directory-name-encryption   Option to either encrypt directory names or leave them intact. (default true)
-      --crypt-filename-encryption string   How to encrypt the filenames. (default "standard")
-      --crypt-password string   Password or pass phrase for encryption.
-      --crypt-password2 string   Password or pass phrase for salt. Optional but recommended.
-      --crypt-remote string   Remote to encrypt/decrypt.
-      --crypt-show-mapping   For all files listed show how the names encrypt.
-      --delete-after   When synchronizing, delete files on destination after transferring (default)
-      --delete-before   When synchronizing, delete files on destination before transferring
-      --delete-during   When synchronizing, delete files during transfer
-      --delete-excluded   Delete files on dest excluded from sync
-      --disable string   Disable a comma separated list of features. Use help to see a list.
-      --drive-acknowledge-abuse   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
-      --drive-allow-import-name-change   Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
-      --drive-alternate-export   Use alternate export URLs for google documents export.,
-      --drive-auth-owner-only   Only consider files owned by the authenticated user.
-      --drive-chunk-size SizeSuffix   Upload chunk size. Must a power of 2 >= 256k. (default 8M)
-      --drive-client-id string   Google Application Client Id
-      --drive-client-secret string   Google Application Client Secret
-      --drive-export-formats string   Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
-      --drive-formats string   Deprecated: see export_formats
-      --drive-impersonate string   Impersonate this user when using a service account.
-      --drive-import-formats string   Comma separated list of preferred formats for uploading Google docs.
-      --drive-keep-revision-forever   Keep new head revision of each file forever.
-      --drive-list-chunk int   Size of listing chunk 100-1000. 0 to disable. (default 1000)
-      --drive-root-folder-id string   ID of the root folder
-      --drive-scope string   Scope that rclone should use when requesting access from drive.
-      --drive-service-account-credentials string   Service Account Credentials JSON blob
-      --drive-service-account-file string   Service Account Credentials JSON file path
-      --drive-shared-with-me   Only show files that are shared with me.
-      --drive-skip-gdocs   Skip google documents in all listings.
-      --drive-team-drive string   ID of the Team Drive
-      --drive-trashed-only   Only show files that are in the trash.
-      --drive-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 8M)
-      --drive-use-created-date   Use file created date instead of modified date.,
-      --drive-use-trash   Send files to the trash instead of deleting permanently. (default true)
-      --drive-v2-download-min-size SizeSuffix   If Object's are greater, use drive v2 API to download. (default off)
-      --dropbox-chunk-size SizeSuffix   Upload chunk size. (< 150M). (default 48M)
-      --dropbox-client-id string   Dropbox App Client Id
-      --dropbox-client-secret string   Dropbox App Client Secret
-      --dropbox-impersonate string   Impersonate this user when using a business account.
-  -n, --dry-run   Do a trial run with no permanent changes
-      --dump string   List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
-      --dump-bodies   Dump HTTP headers and bodies - may contain sensitive info
-      --dump-headers   Dump HTTP bodies - may contain sensitive info
-      --exclude stringArray   Exclude files matching pattern
-      --exclude-from stringArray   Read exclude patterns from file
-      --exclude-if-present string   Exclude directories if filename is present
-      --fast-list   Use recursive list if available. Uses more memory but fewer transactions.
-      --files-from stringArray   Read list of source-file names from file
-  -f, --filter stringArray   Add a file-filtering rule
-      --filter-from stringArray   Read filtering patterns from a file
-      --ftp-host string   FTP host to connect to
-      --ftp-pass string   FTP password
-      --ftp-port string   FTP port, leave blank to use default (21)
-      --ftp-user string   FTP username, leave blank for current username, $USER
-      --gcs-bucket-acl string   Access Control List for new buckets.
-      --gcs-client-id string   Google Application Client Id
-      --gcs-client-secret string   Google Application Client Secret
-      --gcs-location string   Location for the newly created buckets.
-      --gcs-object-acl string   Access Control List for new objects.
-      --gcs-project-number string   Project number.
-      --gcs-service-account-file string   Service Account Credentials JSON file path
-      --gcs-storage-class string   The storage class to use when storing objects in Google Cloud Storage.
-      --http-url string   URL of http host to connect to
-      --hubic-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
-      --hubic-client-id string   Hubic Client Id
-      --hubic-client-secret string   Hubic Client Secret
-      --ignore-case   Ignore case in filters (case insensitive)
-      --ignore-checksum   Skip post copy check of checksums.
-      --ignore-errors   delete even if there are I/O errors
-      --ignore-existing   Skip all files that exist on destination
-      --ignore-size   Ignore size when skipping use mod-time or checksum.
-  -I, --ignore-times   Don't skip files that match size and time - transfer all files
-      --immutable   Do not modify files. Fail if existing files have been modified.
-      --include stringArray   Include files matching pattern
-      --include-from stringArray   Read include patterns from file
-      --jottacloud-hard-delete   Delete files permanently rather than putting them into the trash.
-      --jottacloud-md5-memory-limit SizeSuffix   Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
-      --jottacloud-mountpoint string   The mountpoint to use.
-      --jottacloud-pass string   Password.
-      --jottacloud-unlink   Remove existing public link to file/folder with link command rather than creating.
-      --jottacloud-user string   User Name
-      --local-no-check-updated   Don't check to see if the files change during upload
-      --local-no-unicode-normalization   Don't apply unicode normalization to paths and filenames (Deprecated)
-      --local-nounc string   Disable UNC (long path names) conversion on Windows
-      --log-file string   Log everything to this file
-      --log-format string   Comma separated list of log format options (default "date,time")
-      --log-level string   Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
-      --low-level-retries int   Number of low level retries to do. (default 10)
-      --max-age duration   Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-      --max-backlog int   Maximum number of objects in sync or check backlog. (default 10000)
-      --max-delete int   When synchronizing, limit the number of deletes (default -1)
-      --max-depth int   If set limits the recursion depth to this. (default -1)
-      --max-size int   Only transfer files smaller than this in k or suffix b|k|M|G (default off)
-      --max-transfer int   Maximum size of data to transfer. (default off)
-      --mega-debug   Output more debug from Mega.
-      --mega-hard-delete   Delete files permanently rather than putting them into the trash.
-      --mega-pass string   Password.
-      --mega-user string   User name
-      --memprofile string   Write memory profile to file
-      --min-age duration   Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-      --min-size int   Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-      --modify-window duration   Max time diff to be considered the same (default 1ns)
-      --no-check-certificate   Do not verify the server SSL certificate. Insecure.
-      --no-gzip-encoding   Don't set Accept-Encoding: gzip.
-      --no-traverse   Obsolete - does nothing.
-      --no-update-modtime   Don't update destination mod-time if files identical.
-  -x, --one-file-system   Don't cross filesystem boundaries (unix/macOS only).
-      --onedrive-chunk-size SizeSuffix   Chunk size to upload files with - must be multiple of 320k. (default 10M)
-      --onedrive-client-id string   Microsoft App Client Id
-      --onedrive-client-secret string   Microsoft App Client Secret
-      --onedrive-drive-id string   The ID of the drive to use
-      --onedrive-drive-type string   The type of the drive ( personal | business | documentLibrary )
-      --onedrive-expose-onenote-files   Set to make OneNote files show up in directory listings.
-      --opendrive-password string   Password.
-      --opendrive-username string   Username
-      --pcloud-client-id string   Pcloud App Client Id
-      --pcloud-client-secret string   Pcloud App Client Secret
-  -P, --progress   Show progress during transfer.
-      --qingstor-access-key-id string   QingStor Access Key ID
-      --qingstor-connection-retries int   Number of connection retries. (default 3)
-      --qingstor-endpoint string   Enter a endpoint URL to connection QingStor API.
-      --qingstor-env-auth   Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
-      --qingstor-secret-access-key string   QingStor Secret Access Key (password)
-      --qingstor-zone string   Zone to connect to.
-  -q, --quiet   Print as little stuff as possible
-      --rc   Enable the remote control server.
-      --rc-addr string   IPaddress:Port or :Port to bind server to. (default "localhost:5572")
-      --rc-cert string   SSL PEM key (concatenation of certificate and CA certificate)
-      --rc-client-ca string   Client certificate authority to verify clients with
-      --rc-files string   Path to local files to serve on the HTTP server.
-      --rc-htpasswd string   htpasswd file - if not provided no authentication is done
-      --rc-key string   SSL PEM Private key
-      --rc-max-header-bytes int   Maximum size of request header (default 4096)
-      --rc-no-auth   Don't require auth for certain methods.
-      --rc-pass string   Password for authentication.
-      --rc-realm string   realm for authentication (default "rclone")
-      --rc-serve   Enable the serving of remote objects.
-      --rc-server-read-timeout duration   Timeout for server reading data (default 1h0m0s)
-      --rc-server-write-timeout duration   Timeout for server writing data (default 1h0m0s)
-      --rc-user string   User name for authentication.
-      --retries int   Retry operations this many times if they fail (default 3)
-      --retries-sleep duration   Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
-      --s3-access-key-id string   AWS Access Key ID.
-      --s3-acl string   Canned ACL used when creating buckets and storing or copying objects.
-      --s3-chunk-size SizeSuffix   Chunk size to use for uploading. (default 5M)
-      --s3-disable-checksum   Don't store MD5 checksum with object metadata
-      --s3-endpoint string   Endpoint for S3 API.
-      --s3-env-auth   Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
-      --s3-force-path-style   If true use path style access if false use virtual hosted style. (default true)
-      --s3-location-constraint string   Location constraint - must be set to match the Region.
-      --s3-provider string   Choose your S3 provider.
-      --s3-region string   Region to connect to.
-      --s3-secret-access-key string   AWS Secret Access Key (password)
-      --s3-server-side-encryption string   The server-side encryption algorithm used when storing this object in S3.
-      --s3-session-token string   An AWS session token
-      --s3-sse-kms-key-id string   If using KMS ID you must provide the ARN of Key.
-      --s3-storage-class string   The storage class to use when storing new objects in S3.
-      --s3-upload-concurrency int   Concurrency for multipart uploads. (default 2)
-      --s3-v2-auth   If true use v2 authentication.
-      --sftp-ask-password   Allow asking for SFTP password when needed.
-      --sftp-disable-hashcheck   Disable the execution of SSH commands to determine if remote file hashing is available.
-      --sftp-host string   SSH host to connect to
-      --sftp-key-file string   Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
-      --sftp-pass string   SSH password, leave blank to use ssh-agent.
-      --sftp-path-override string   Override path used by SSH connection.
-      --sftp-port string   SSH port, leave blank to use default (22)
-      --sftp-set-modtime   Set the modified time on the remote if set. (default true)
-      --sftp-use-insecure-cipher   Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
-      --sftp-user string   SSH username, leave blank for current username, ncw
-      --size-only   Skip based on size only, not mod-time or checksum
-      --skip-links   Don't warn about skipped symlinks.
-      --stats duration   Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
-      --stats-file-name-length int   Max file name length in stats. 0 for no limit (default 40)
-      --stats-log-level string   Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
-      --stats-one-line   Make the stats fit on one line.
-      --stats-unit string   Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
-      --streaming-upload-cutoff int   Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
-      --suffix string   Suffix for use with --backup-dir.
-      --swift-auth string   Authentication URL for server (OS_AUTH_URL).
-      --swift-auth-token string   Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
-      --swift-auth-version int   AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
-      --swift-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
-      --swift-domain string   User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
-      --swift-endpoint-type string   Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-      --swift-env-auth   Get swift credentials from environment variables in standard OpenStack form.
-      --swift-key string   API key or password (OS_PASSWORD).
-      --swift-region string   Region name - optional (OS_REGION_NAME)
-      --swift-storage-policy string   The storage policy to use when creating a new container
-      --swift-storage-url string   Storage URL - optional (OS_STORAGE_URL)
-      --swift-tenant string   Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
-      --swift-tenant-domain string   Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
-      --swift-tenant-id string   Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
-      --swift-user string   User name to log in (OS_USERNAME).
-      --swift-user-id string   User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
-      --syslog   Use Syslog for logging
-      --syslog-facility string   Facility for syslog, eg KERN,USER,... (default "DAEMON")
-      --timeout duration   IO idle timeout (default 5m0s)
-      --tpslimit float   Limit HTTP transactions per second to this.
-      --tpslimit-burst int   Max burst of transactions for --tpslimit. (default 1)
-      --track-renames   When synchronizing, track file renames and do a server side move if possible
-      --transfers int   Number of file transfers to run in parallel. (default 4)
-      --union-remotes string   List of space separated remotes.
-  -u, --update   Skip files that are newer on the destination.
-      --use-server-modtime   Use server modified time instead of object metadata
-      --user-agent string   Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45")
-  -v, --verbose count   Print lots more stuff (repeat for more)
-      --webdav-bearer-token string   Bearer token instead of user/pass (eg a Macaroon)
-      --webdav-pass string   Password.
-      --webdav-url string   URL of http host to connect to
-      --webdav-user string   User name
-      --webdav-vendor string   Name of the Webdav site/service/software you are using
-      --yandex-client-id string   Yandex Client Id
-      --yandex-client-secret string   Yandex Client Secret
-      --yandex-unlink   Remove existing public link to file/folder with link command rather than creating.
+      --acd-auth-url string   Auth server URL.
+      --acd-client-id string   Amazon Application Client ID.
+      --acd-client-secret string   Amazon Application Client Secret.
+      --acd-templink-threshold SizeSuffix   Files >= this size will be downloaded via their tempLink. (default 9G)
+      --acd-token-url string   Token server url.
+      --acd-upload-wait-per-gb Duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+      --alias-remote string   Remote or path to alias.
+      --ask-password   Allow prompt for password for encrypted configuration. (default true)
+      --auto-confirm   If enabled, do not request console confirmation.
+      --azureblob-access-tier string   Access tier of blob: hot, cool or archive.
+      --azureblob-account string   Storage Account Name (leave blank to use connection string or SAS URL)
+      --azureblob-chunk-size SizeSuffix   Upload chunk size (<= 100MB). (default 4M)
+      --azureblob-endpoint string   Endpoint for the service
+      --azureblob-key string   Storage Account Key (leave blank to use connection string or SAS URL)
+      --azureblob-list-chunk int   Size of blob list. (default 5000)
+      --azureblob-sas-url string   SAS URL for container level access only
+      --azureblob-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+      --b2-account string   Account ID or Application Key ID
+      --b2-chunk-size SizeSuffix   Upload chunk size. Must fit in memory. (default 96M)
+      --b2-disable-checksum   Disable checksums for large (> upload cutoff) files
+      --b2-endpoint string   Endpoint for the service.
+      --b2-hard-delete   Permanently delete files on remote removal, otherwise hide files.
+      --b2-key string   Application Key
+      --b2-test-mode string   A flag string for X-Bz-Test-Mode header for debugging.
+      --b2-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload. (default 200M)
+      --b2-versions   Include old versions in directory listings.
+      --backup-dir string   Make backups into hierarchy based in DIR.
+      --bind string   Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+      --box-client-id string   Box App Client Id.
+      --box-client-secret string   Box App Client Secret
+      --box-commit-retries int   Max number of times to try committing a multipart file. (default 100)
+      --box-upload-cutoff SizeSuffix   Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+      --buffer-size SizeSuffix   In memory buffer size when reading files for each --transfer. (default 16M)
+      --bwlimit BwTimetable   Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+      --cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+      --cache-chunk-no-memory   Disable the in-memory cache for storing chunks during streaming.
+      --cache-chunk-path string   Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
+      --cache-chunk-size SizeSuffix   The size of a chunk (partial file data). (default 5M)
+      --cache-chunk-total-size SizeSuffix   The total size that the chunks can take up on the local disk. (default 10G)
+      --cache-db-path string   Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
+      --cache-db-purge   Clear all the cached data for this remote on start.
+      --cache-db-wait-time Duration   How long to wait for the DB to be available - 0 is unlimited (default 1s)
+      --cache-dir string   Directory rclone will use for caching. (default "$HOME/.cache/rclone")
+      --cache-info-age Duration   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+      --cache-plex-insecure string   Skip all certificate verifications when connecting to the Plex server
+      --cache-plex-password string   The password of the Plex user
+      --cache-plex-url string   The URL of the Plex server
+      --cache-plex-username string   The username of the Plex user
+      --cache-read-retries int   How many times to retry a read from a cache storage. (default 10)
+      --cache-remote string   Remote to cache.
+      --cache-rps int   Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+      --cache-tmp-upload-path string   Directory to keep temporary files until they are uploaded.
+      --cache-tmp-wait-time Duration   How long should files be stored in local cache before being uploaded (default 15s)
+      --cache-workers int   How many workers should run in parallel to download chunks. (default 4)
+      --cache-writes   Cache file data on writes through the FS
+      --checkers int   Number of checkers to run in parallel. (default 8)
+  -c, --checksum   Skip based on checksum (if available) & size, not mod-time & size
+      --config string   Config file. (default "/home/ncw/.rclone.conf")
+      --contimeout duration   Connect timeout (default 1m0s)
+  -L, --copy-links   Follow symlinks and copy the pointed to item.
+      --cpuprofile string   Write cpu profile to file
+      --crypt-directory-name-encryption   Option to either encrypt directory names or leave them intact. (default true)
+      --crypt-filename-encryption string   How to encrypt the filenames. (default "standard")
+      --crypt-password string   Password or pass phrase for encryption.
+      --crypt-password2 string   Password or pass phrase for salt. Optional but recommended.
+      --crypt-remote string   Remote to encrypt/decrypt.
+      --crypt-show-mapping   For all files listed show how the names encrypt.
+      --delete-after   When synchronizing, delete files on destination after transferring (default)
+      --delete-before   When synchronizing, delete files on destination before transferring
+      --delete-during   When synchronizing, delete files during transfer
+      --delete-excluded   Delete files on dest excluded from sync
+      --disable string   Disable a comma separated list of features. Use help to see a list.
+      --drive-acknowledge-abuse   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+      --drive-allow-import-name-change   Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+      --drive-alternate-export   Use alternate export URLs for google documents export.,
+      --drive-auth-owner-only   Only consider files owned by the authenticated user.
+      --drive-chunk-size SizeSuffix   Upload chunk size. Must a power of 2 >= 256k. (default 8M)
+      --drive-client-id string   Google Application Client Id
+      --drive-client-secret string   Google Application Client Secret
+      --drive-export-formats string   Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+      --drive-formats string   Deprecated: see export_formats
+      --drive-impersonate string   Impersonate this user when using a service account.
+      --drive-import-formats string   Comma separated list of preferred formats for uploading Google docs.
+      --drive-keep-revision-forever   Keep new head revision of each file forever.
+      --drive-list-chunk int   Size of listing chunk 100-1000. 0 to disable. (default 1000)
+      --drive-pacer-burst int   Number of API calls to allow without sleeping. (default 100)
+      --drive-pacer-min-sleep Duration   Minimum time to sleep between API calls. (default 100ms)
+      --drive-root-folder-id string   ID of the root folder
+      --drive-scope string   Scope that rclone should use when requesting access from drive.
+      --drive-service-account-credentials string   Service Account Credentials JSON blob
+      --drive-service-account-file string   Service Account Credentials JSON file path
+      --drive-shared-with-me   Only show files that are shared with me.
+      --drive-skip-gdocs   Skip google documents in all listings.
+      --drive-team-drive string   ID of the Team Drive
+      --drive-trashed-only   Only show files that are in the trash.
+      --drive-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 8M)
+      --drive-use-created-date   Use file created date instead of modified date.,
+      --drive-use-trash   Send files to the trash instead of deleting permanently. (default true)
+      --drive-v2-download-min-size SizeSuffix   If Object's are greater, use drive v2 API to download. (default off)
+      --dropbox-chunk-size SizeSuffix   Upload chunk size. (< 150M). (default 48M)
+      --dropbox-client-id string   Dropbox App Client Id
+      --dropbox-client-secret string   Dropbox App Client Secret
+      --dropbox-impersonate string   Impersonate this user when using a business account.
+  -n, --dry-run   Do a trial run with no permanent changes
+      --dump DumpFlags   List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+      --dump-bodies   Dump HTTP headers and bodies - may contain sensitive info
+      --dump-headers   Dump HTTP bodies - may contain sensitive info
+      --exclude stringArray   Exclude files matching pattern
+      --exclude-from stringArray   Read exclude patterns from file
+      --exclude-if-present string   Exclude directories if filename is present
+      --fast-list   Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M) + --jottacloud-user string User Name: + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_config_show.md b/docs/content/commands/rclone_config_show.md index 571ba7d60..bd12b7bca 100644 --- a/docs/content/commands/rclone_config_show.md +++ b/docs/content/commands/rclone_config_show.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone config show" slug: rclone_config_show url: /commands/rclone_config_show/ @@ -25,285 +25,303 @@ rclone config show [] [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). 
(default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-list-chunk int Size of blob list. (default 5000)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
- --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
- --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
- --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
- --cache-db-purge Clear all the cached data for this remote on start.
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
- --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
- --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks. (default 4)
- --cache-writes Cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transferring (default)
- --delete-before When synchronizing, delete files on destination before transferring
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
- --drive-alternate-export Use alternate export URLs for google documents export.,
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-formats string Deprecated: see export_formats
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
- --drive-keep-revision-forever Keep new head revision of each file forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-credentials string Service Account Credentials JSON blob
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me.
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-team-drive string ID of the Team Drive
- --drive-trashed-only Only show files that are in the trash.
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use file created date instead of modified date.,
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
- --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- --dropbox-impersonate string Impersonate this user when using a business account.
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, $USER
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-case Ignore case in filters (case insensitive)
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-format string Comma separated list of log format options (default "date,time")
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --onedrive-drive-id string The ID of the drive to use
- --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
- --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-files string Path to local files to serve on the HTTP server.
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-no-auth Don't require auth for certain methods.
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-serve Enable the serving of remote objects.
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-session-token string An AWS session token
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing new objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --s3-v2-auth If true use v2 authentication.
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- --union-remotes string List of space separated remotes.
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string.
The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. 
Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. 
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. 
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors Delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping; use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name. + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be a multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_config_update.md b/docs/content/commands/rclone_config_update.md index 21837d398..656efdcac 100644 --- a/docs/content/commands/rclone_config_update.md +++ b/docs/content/commands/rclone_config_update.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone config update" slug: rclone_config_update url: /commands/rclone_config_update/ @@ -18,6 +18,11 @@ For example to update the env_auth field of a remote of name myremote you would rclone config update myremote swift env_auth true +If the remote uses oauth the token will be updated, if you don't +require this add an extra parameter thus: + + rclone config update myremote swift env_auth true config_refresh_token false + ``` rclone config update [ ]+ [flags] @@ -32,285 +37,303 @@ rclone config update [ ]+ [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. 
- --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. 
- --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
- --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
- --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
- --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
- --cache-db-purge Clear all the cached data for this remote on start.
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
- --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
- --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks. (default 4)
- --cache-writes Cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transferring (default)
- --delete-before When synchronizing, delete files on destination before transferring
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
- --drive-alternate-export Use alternate export URLs for google documents export.,
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-formats string Deprecated: see export_formats
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
- --drive-keep-revision-forever Keep new head revision of each file forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-credentials string Service Account Credentials JSON blob
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me.
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-team-drive string ID of the Team Drive
- --drive-trashed-only Only show files that are in the trash.
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use file created date instead of modified date.,
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
- --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- --dropbox-impersonate string Impersonate this user when using a business account.
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, $USER
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-case Ignore case in filters (case insensitive)
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-format string Comma separated list of log format options (default "date,time")
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --onedrive-drive-id string The ID of the drive to use
- --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
- --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-files string Path to local files to serve on the HTTP server.
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-no-auth Don't require auth for certain methods.
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-serve Enable the serving of remote objects.
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-session-token string An AWS session token
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing new objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --s3-v2-auth If true use v2 authentication.
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- --union-remotes string List of space separated remotes.
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
- --yandex-unlink Remove existing public link to file/folder with link command rather than creating.
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-disable-checksum Disable checksums for large (> upload cutoff) files
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum (if available) & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
+ --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ --dropbox-impersonate string Impersonate this user when using a business account.
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, $USER
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --hubic-no-chunk Don't chunk files during streaming upload.
+ --ignore-case Ignore case in filters (case insensitive)
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping, use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
+ --jottacloud-user string User Name
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
+ --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-files string Path to local files to serve on the HTTP server.
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-no-auth Don't require auth for certain methods.
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-serve Enable the serving of remote objects.
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
+ --s3-bucket-acl string Canned ACL used when creating buckets.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
+ --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
+ --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
+ --sftp-key-use-agent When set forces the usage of the ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
+ --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
+ --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-no-chunk Don't chunk files during streaming upload. + --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_copy.md b/docs/content/commands/rclone_copy.md index ea3cb04ae..6bcd0c5d3 100644 --- a/docs/content/commands/rclone_copy.md +++ b/docs/content/commands/rclone_copy.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone copy" slug: rclone_copy url: /commands/rclone_copy/ @@ -47,6 +47,17 @@ written a trailing / - meaning "copy the contents of this directory". This applies to all commands and whether you are talking about the source or destination. +See the [--no-traverse](/docs/#no-traverse) option for controlling +whether rclone lists the destination directory or not. Supplying this +option when copying a small number of files into a large destination +can speed transfers up greatly. + +For example, if you have many files in /path/to/src but only a few of +them change every day, you can copy all the files which have +changed recently very efficiently like this: + + rclone copy --max-age 24h --no-traverse /path/to/src remote: + **Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics @@ -63,285 +74,303 @@ rclone copy source:path dest:path [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. 
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. 
(default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. 
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. (default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. 
Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. - --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. 
- -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. - --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. 
- --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. 
(default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. - --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. 
- --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. - --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. 
- --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. - --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. 
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. - --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... 
(default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. (default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. 
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. 
(default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. + --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. 
(default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. + --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. 
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. + --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. 
+ -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. + --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. 
+ --ignore-errors Delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping; use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. + --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. 
(default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. + --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. 
(default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. 
(default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. 
This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). 
+ --swift-no-chunk Don't chunk files during streaming upload. + --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_copyto.md b/docs/content/commands/rclone_copyto.md index 583e2f039..61e82575b 100644 --- a/docs/content/commands/rclone_copyto.md +++ b/docs/content/commands/rclone_copyto.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone copyto" slug: rclone_copyto url: /commands/rclone_copyto/ @@ -53,285 +53,303 @@ rclone copyto source:path dest:path [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). 
(default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). 
(default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. (default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. 
(default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. - --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. 
(default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
- --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. 
(default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. 
- --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. 
- --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. 
- --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. - --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. 
(default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. (default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. 
Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. 
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. 
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors Delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping; use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name: + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_copyurl.md b/docs/content/commands/rclone_copyurl.md index f06df4758..b8ff856c5 100644 --- a/docs/content/commands/rclone_copyurl.md +++ b/docs/content/commands/rclone_copyurl.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone copyurl" slug: rclone_copyurl url: /commands/rclone_copyurl/ @@ -28,285 +28,303 @@ rclone copyurl https://example.com dest:path [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). 
(default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). 
(default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. (default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. 
(default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. - --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. 
(default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
- --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. 
(default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. 
- --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. 
- --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. 
- --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. - --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. 
(default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. (default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. 
Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. 
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password.
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads.
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+      --swift-region string Region name - optional (OS_REGION_NAME)
+      --swift-storage-policy string The storage policy to use when creating a new container
+      --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+      --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+      --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+      --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+      --swift-user string User name to log in (OS_USERNAME).
+      --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+      --syslog Use Syslog for logging
+      --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+      --timeout duration IO idle timeout (default 5m0s)
+      --tpslimit float Limit HTTP transactions per second to this.
+      --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+      --track-renames When synchronizing, track file renames and do a server side move if possible
+      --transfers int Number of file transfers to run in parallel. (default 4)
+      --union-remotes string List of space separated remotes.
+  -u, --update Skip files that are newer on the destination.
+      --use-cookies Enable session cookiejar.
+      --use-mmap Use mmap allocator (see docs).
+      --use-server-modtime Use server modified time instead of object metadata
+      --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
+  -v, --verbose count Print lots more stuff (repeat for more)
+      --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+      --webdav-pass string Password.
+      --webdav-url string URL of http host to connect to
+      --webdav-user string User name
+      --webdav-vendor string Name of the Webdav site/service/software you are using
+      --yandex-client-id string Yandex Client Id
+      --yandex-client-secret string Yandex Client Secret
+      --yandex-unlink Remove existing public link to file/folder with link command rather than creating.
 ```
 ### SEE ALSO
 * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 24-Nov-2018
+###### Auto generated by spf13/cobra on 9-Feb-2019
diff --git a/docs/content/commands/rclone_cryptcheck.md b/docs/content/commands/rclone_cryptcheck.md
index fb85b5fc3..527e01d86 100644
--- a/docs/content/commands/rclone_cryptcheck.md
+++ b/docs/content/commands/rclone_cryptcheck.md
@@ -1,5 +1,5 @@
 ---
-date: 2018-11-24T13:43:29Z
+date: 2019-02-09T10:42:18Z
 title: "rclone cryptcheck"
 slug: rclone_cryptcheck
 url: /commands/rclone_cryptcheck/
@@ -53,285 +53,303 @@ rclone cryptcheck remote:path cryptedremote:path [flags]
 ### Options inherited from parent commands
 ```
-      --acd-auth-url string Auth server URL.
-      --acd-client-id string Amazon Application Client ID.
-      --acd-client-secret string Amazon Application Client Secret.
-      --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
-      --acd-token-url string Token server url.
-      --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
-      --alias-remote string Remote or path to alias.
-      --ask-password Allow prompt for password for encrypted configuration. (default true)
-      --auto-confirm If enabled, do not request console confirmation.
-      --azureblob-access-tier string Access tier of blob: hot, cool or archive.
-      --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
-      --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
-      --azureblob-endpoint string Endpoint for the service
-      --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
-      --azureblob-list-chunk int Size of blob list. (default 5000)
-      --azureblob-sas-url string SAS URL for container level access only
-      --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
-      --b2-account string Account ID or Application Key ID
-      --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
-      --b2-endpoint string Endpoint for the service.
-      --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
-      --b2-key string Application Key
-      --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
-      --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
-      --b2-versions Include old versions in directory listings.
-      --backup-dir string Make backups into hierarchy based in DIR.
-      --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
-      --box-client-id string Box App Client Id.
-      --box-client-secret string Box App Client Secret
-      --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
-      --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
-      --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
-      --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
-      --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
-      --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
-      --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
-      --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
-      --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
-      --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
-      --cache-db-purge Clear all the cached data for this remote on start.
-      --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
-      --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
-      --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
-      --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
-      --cache-plex-password string The password of the Plex user
-      --cache-plex-url string The URL of the Plex server
-      --cache-plex-username string The username of the Plex user
-      --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
-      --cache-remote string Remote to cache.
-      --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
-      --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
-      --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
-      --cache-workers int How many workers should run in parallel to download chunks. (default 4)
-      --cache-writes Cache file data on writes through the FS
-      --checkers int Number of checkers to run in parallel. (default 8)
-  -c, --checksum Skip based on checksum & size, not mod-time & size
-      --config string Config file. (default "/home/ncw/.rclone.conf")
-      --contimeout duration Connect timeout (default 1m0s)
-  -L, --copy-links Follow symlinks and copy the pointed to item.
-      --cpuprofile string Write cpu profile to file
-      --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
-      --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
-      --crypt-password string Password or pass phrase for encryption.
-      --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
-      --crypt-remote string Remote to encrypt/decrypt.
-      --crypt-show-mapping For all files listed show how the names encrypt.
-      --delete-after When synchronizing, delete files on destination after transferring (default)
-      --delete-before When synchronizing, delete files on destination before transferring
-      --delete-during When synchronizing, delete files during transfer
-      --delete-excluded Delete files on dest excluded from sync
-      --disable string Disable a comma separated list of features. Use help to see a list.
-      --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
-      --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
-      --drive-alternate-export Use alternate export URLs for google documents export.,
-      --drive-auth-owner-only Only consider files owned by the authenticated user.
-      --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
-      --drive-client-id string Google Application Client Id
-      --drive-client-secret string Google Application Client Secret
-      --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
-      --drive-formats string Deprecated: see export_formats
-      --drive-impersonate string Impersonate this user when using a service account.
-      --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
-      --drive-keep-revision-forever Keep new head revision of each file forever.
-      --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
-      --drive-root-folder-id string ID of the root folder
-      --drive-scope string Scope that rclone should use when requesting access from drive.
-      --drive-service-account-credentials string Service Account Credentials JSON blob
-      --drive-service-account-file string Service Account Credentials JSON file path
-      --drive-shared-with-me Only show files that are shared with me.
-      --drive-skip-gdocs Skip google documents in all listings.
-      --drive-team-drive string ID of the Team Drive
-      --drive-trashed-only Only show files that are in the trash.
-      --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
-      --drive-use-created-date Use file created date instead of modified date.,
-      --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
-      --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
-      --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-      --dropbox-client-id string Dropbox App Client Id
-      --dropbox-client-secret string Dropbox App Client Secret
-      --dropbox-impersonate string Impersonate this user when using a business account.
-  -n, --dry-run Do a trial run with no permanent changes
-      --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
-      --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
-      --dump-headers Dump HTTP bodies - may contain sensitive info
-      --exclude stringArray Exclude files matching pattern
-      --exclude-from stringArray Read exclude patterns from file
-      --exclude-if-present string Exclude directories if filename is present
-      --fast-list Use recursive list if available. Uses more memory but fewer transactions.
-      --files-from stringArray Read list of source-file names from file
-  -f, --filter stringArray Add a file-filtering rule
-      --filter-from stringArray Read filtering patterns from a file
-      --ftp-host string FTP host to connect to
-      --ftp-pass string FTP password
-      --ftp-port string FTP port, leave blank to use default (21)
-      --ftp-user string FTP username, leave blank for current username, $USER
-      --gcs-bucket-acl string Access Control List for new buckets.
-      --gcs-client-id string Google Application Client Id
-      --gcs-client-secret string Google Application Client Secret
-      --gcs-location string Location for the newly created buckets.
-      --gcs-object-acl string Access Control List for new objects.
-      --gcs-project-number string Project number.
-      --gcs-service-account-file string Service Account Credentials JSON file path
-      --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
-      --http-url string URL of http host to connect to
-      --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
-      --hubic-client-id string Hubic Client Id
-      --hubic-client-secret string Hubic Client Secret
-      --ignore-case Ignore case in filters (case insensitive)
-      --ignore-checksum Skip post copy check of checksums.
-      --ignore-errors delete even if there are I/O errors
-      --ignore-existing Skip all files that exist on destination
-      --ignore-size Ignore size when skipping use mod-time or checksum.
-  -I, --ignore-times Don't skip files that match size and time - transfer all files
-      --immutable Do not modify files. Fail if existing files have been modified.
-      --include stringArray Include files matching pattern
-      --include-from stringArray Read include patterns from file
-      --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
-      --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
-      --jottacloud-mountpoint string The mountpoint to use.
-      --jottacloud-pass string Password.
-      --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
-      --jottacloud-user string User Name
-      --local-no-check-updated Don't check to see if the files change during upload
-      --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
-      --local-nounc string Disable UNC (long path names) conversion on Windows
-      --log-file string Log everything to this file
-      --log-format string Comma separated list of log format options (default "date,time")
-      --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
-      --low-level-retries int Number of low level retries to do. (default 10)
-      --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-      --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
-      --max-delete int When synchronizing, limit the number of deletes (default -1)
-      --max-depth int If set limits the recursion depth to this. (default -1)
-      --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
-      --max-transfer int Maximum size of data to transfer. (default off)
-      --mega-debug Output more debug from Mega.
-      --mega-hard-delete Delete files permanently rather than putting them into the trash.
-      --mega-pass string Password.
-      --mega-user string User name
-      --memprofile string Write memory profile to file
-      --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-      --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-      --modify-window duration Max time diff to be considered the same (default 1ns)
-      --no-check-certificate Do not verify the server SSL certificate. Insecure.
-      --no-gzip-encoding Don't set Accept-Encoding: gzip.
-      --no-traverse Obsolete - does nothing.
-      --no-update-modtime Don't update destination mod-time if files identical.
-  -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
-      --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
-      --onedrive-client-id string Microsoft App Client Id
-      --onedrive-client-secret string Microsoft App Client Secret
-      --onedrive-drive-id string The ID of the drive to use
-      --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-      --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
-      --opendrive-password string Password.
-      --opendrive-username string Username
-      --pcloud-client-id string Pcloud App Client Id
-      --pcloud-client-secret string Pcloud App Client Secret
-  -P, --progress Show progress during transfer.
-      --qingstor-access-key-id string QingStor Access Key ID
-      --qingstor-connection-retries int Number of connection retries. (default 3)
-      --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
-      --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
-      --qingstor-secret-access-key string QingStor Secret Access Key (password)
-      --qingstor-zone string Zone to connect to.
-  -q, --quiet Print as little stuff as possible
-      --rc Enable the remote control server.
-      --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
-      --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
-      --rc-client-ca string Client certificate authority to verify clients with
-      --rc-files string Path to local files to serve on the HTTP server.
-      --rc-htpasswd string htpasswd file - if not provided no authentication is done
-      --rc-key string SSL PEM Private key
-      --rc-max-header-bytes int Maximum size of request header (default 4096)
-      --rc-no-auth Don't require auth for certain methods.
-      --rc-pass string Password for authentication.
-      --rc-realm string realm for authentication (default "rclone")
-      --rc-serve Enable the serving of remote objects.
-      --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
-      --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
-      --rc-user string User name for authentication.
-      --retries int Retry operations this many times if they fail (default 3)
-      --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
-      --s3-access-key-id string AWS Access Key ID.
-      --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
-      --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
-      --s3-disable-checksum Don't store MD5 checksum with object metadata
-      --s3-endpoint string Endpoint for S3 API.
-      --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
-      --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
-      --s3-location-constraint string Location constraint - must be set to match the Region.
-      --s3-provider string Choose your S3 provider.
-      --s3-region string Region to connect to.
-      --s3-secret-access-key string AWS Secret Access Key (password)
-      --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
-      --s3-session-token string An AWS session token
-      --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
-      --s3-storage-class string The storage class to use when storing new objects in S3.
-      --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
-      --s3-v2-auth If true use v2 authentication.
-      --sftp-ask-password Allow asking for SFTP password when needed.
-      --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
-      --sftp-host string SSH host to connect to
-      --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
-      --sftp-pass string SSH password, leave blank to use ssh-agent.
-      --sftp-path-override string Override path used by SSH connection.
-      --sftp-port string SSH port, leave blank to use default (22)
-      --sftp-set-modtime Set the modified time on the remote if set. (default true)
-      --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
-      --sftp-user string SSH username, leave blank for current username, ncw
-      --size-only Skip based on size only, not mod-time or checksum
-      --skip-links Don't warn about skipped symlinks.
-      --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
-      --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
-      --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
-      --stats-one-line Make the stats fit on one line.
-      --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
-      --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
-      --suffix string Suffix for use with --backup-dir.
-      --swift-auth string Authentication URL for server (OS_AUTH_URL).
-      --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
-      --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
-      --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
-      --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
-      --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-      --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
-      --swift-key string API key or password (OS_PASSWORD).
-      --swift-region string Region name - optional (OS_REGION_NAME)
-      --swift-storage-policy string The storage policy to use when creating a new container
-      --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-      --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
-      --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
-      --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
-      --swift-user string User name to log in (OS_USERNAME).
-      --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
-      --syslog Use Syslog for logging
-      --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
-      --timeout duration IO idle timeout (default 5m0s)
-      --tpslimit float Limit HTTP transactions per second to this.
-      --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
-      --track-renames When synchronizing, track file renames and do a server side move if possible
-      --transfers int Number of file transfers to run in parallel. (default 4)
-      --union-remotes string List of space separated remotes.
-  -u, --update Skip files that are newer on the destination.
-      --use-server-modtime Use server modified time instead of object metadata
-      --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45")
-  -v, --verbose count Print lots more stuff (repeat for more)
-      --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
-      --webdav-pass string Password.
-      --webdav-url string URL of http host to connect to
-      --webdav-user string User name
-      --webdav-vendor string Name of the Webdav site/service/software you are using
-      --yandex-client-id string Yandex Client Id
-      --yandex-client-secret string Yandex Client Secret
-      --yandex-unlink Remove existing public link to file/folder with link command rather than creating.
+      --acd-auth-url string Auth server URL.
+      --acd-client-id string Amazon Application Client ID.
+      --acd-client-secret string Amazon Application Client Secret.
+      --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+      --acd-token-url string Token server url.
+      --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+      --alias-remote string Remote or path to alias.
+      --ask-password Allow prompt for password for encrypted configuration. (default true)
+      --auto-confirm If enabled, do not request console confirmation.
+      --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+      --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+      --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+      --azureblob-endpoint string Endpoint for the service
+      --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+      --azureblob-list-chunk int Size of blob list. (default 5000)
+      --azureblob-sas-url string SAS URL for container level access only
+      --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+      --b2-account string Account ID or Application Key ID
+      --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+      --b2-disable-checksum Disable checksums for large (> upload cutoff) files
+      --b2-endpoint string Endpoint for the service.
+      --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+      --b2-key string Application Key
+      --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+      --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+      --b2-versions Include old versions in directory listings.
+      --backup-dir string Make backups into hierarchy based in DIR.
+      --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+      --box-client-id string Box App Client Id.
+      --box-client-secret string Box App Client Secret
+      --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+      --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+      --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
+      --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+      --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+      --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+      --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
+      --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+      --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+      --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
+      --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. 
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors Delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping; use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_cryptdecode.md b/docs/content/commands/rclone_cryptdecode.md index e7a49e4fc..f01703320 100644 --- a/docs/content/commands/rclone_cryptdecode.md +++ b/docs/content/commands/rclone_cryptdecode.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone cryptdecode" slug: rclone_cryptdecode url: /commands/rclone_cryptdecode/ @@ -37,285 +37,303 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). 
(default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). 
(default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. (default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. 
(default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. - --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. 
(default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
- --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. 
(default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. 
- --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. 
- --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. 
- --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. - --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. 
(default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. (default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. 
Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. 
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. 
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors Delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping; use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_dbhashsum.md b/docs/content/commands/rclone_dbhashsum.md index 8da564ad1..4cc8968de 100644 --- a/docs/content/commands/rclone_dbhashsum.md +++ b/docs/content/commands/rclone_dbhashsum.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone dbhashsum" slug: rclone_dbhashsum url: /commands/rclone_dbhashsum/ @@ -30,285 +30,303 @@ rclone dbhashsum remote:path [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). 
(default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). 
(default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. (default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. 
(default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. - --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. 
(default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
- --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. 
(default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. 
- --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. 
- --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. 
- --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. - --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. 
(default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. (default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. 
Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. 
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. 
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_dedupe.md b/docs/content/commands/rclone_dedupe.md index 983ecc969..16fef1f38 100644 --- a/docs/content/commands/rclone_dedupe.md +++ b/docs/content/commands/rclone_dedupe.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone dedupe" slug: rclone_dedupe url: /commands/rclone_dedupe/ @@ -106,285 +106,303 @@ rclone dedupe [mode] remote:path [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). 
(default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). 
(default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. (default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. 
(default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. - --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. 
(default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
- --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. 
(default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. 
- --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. 
- --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. 
- --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. - --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. 
(default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- --union-remotes string List of space separated remotes.
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
- --yandex-unlink Remove existing public link to file/folder with link command rather than creating.
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-disable-checksum Disable checksums for large (> upload cutoff) files
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum (if available) & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
+ --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ --dropbox-impersonate string Impersonate this user when using a business account.
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, $USER
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --hubic-no-chunk Don't chunk files during streaming upload.
+ --ignore-case Ignore case in filters (case insensitive)
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
+ --jottacloud-user string User Name
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set, limits the recursion depth to this. (default -1)
+ --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
+ --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-files string Path to local files to serve on the HTTP server.
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-no-auth Don't require auth for certain methods.
+ --rc-pass string Password for authentication.
+ --rc-realm string Realm for authentication (default "rclone")
+ --rc-serve Enable the serving of remote objects.
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
+ --s3-bucket-acl string Canned ACL used when creating buckets.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
+ --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
+ --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
+ --sftp-key-use-agent When set, forces the usage of the ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
+ --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
+ --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-no-chunk Don't chunk files during streaming upload.
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-cookies Enable session cookiejar.
+ --use-mmap Use mmap allocator (see docs).
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
+ --yandex-unlink Remove existing public link to file/folder with link command rather than creating.
 ```

 ### SEE ALSO

 * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

-###### Auto generated by spf13/cobra on 24-Nov-2018
+###### Auto generated by spf13/cobra on 9-Feb-2019
diff --git a/docs/content/commands/rclone_delete.md b/docs/content/commands/rclone_delete.md
index 9d4687d56..f4869d9cc 100644
--- a/docs/content/commands/rclone_delete.md
+++ b/docs/content/commands/rclone_delete.md
@@ -1,5 +1,5 @@
 ---
-date: 2018-11-24T13:43:29Z
+date: 2019-02-09T10:42:18Z
 title: "rclone delete"
 slug: rclone_delete
 url: /commands/rclone_delete/
@@ -46,285 +46,303 @@ rclone delete remote:path [flags]
 ### Options inherited from parent commands

 ```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob: hot, cool or archive.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-list-chunk int Size of blob list. (default 5000)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
- --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
- --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
- --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
- --cache-db-purge Clear all the cached data for this remote on start.
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
- --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
- --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks. (default 4)
- --cache-writes Cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transferring (default)
- --delete-before When synchronizing, delete files on destination before transferring
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
- --drive-alternate-export Use alternate export URLs for google documents export.,
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-formats string Deprecated: see export_formats
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
- --drive-keep-revision-forever Keep new head revision of each file forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-credentials string Service Account Credentials JSON blob
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me.
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-team-drive string ID of the Team Drive
- --drive-trashed-only Only show files that are in the trash.
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use file created date instead of modified date.,
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
- --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- --dropbox-impersonate string Impersonate this user when using a business account.
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, $USER
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-case Ignore case in filters (case insensitive)
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-format string Comma separated list of log format options (default "date,time")
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. 
- --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. 
- --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. - --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. 
(default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. (default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. 
Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. 
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export., + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. 
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date., + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP bodies - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M) + --jottacloud-user string User Name: + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_deletefile.md b/docs/content/commands/rclone_deletefile.md index 241d1e575..27a15cd79 100644 --- a/docs/content/commands/rclone_deletefile.md +++ b/docs/content/commands/rclone_deletefile.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone deletefile" slug: rclone_deletefile url: /commands/rclone_deletefile/ @@ -29,285 +29,303 @@ rclone deletefile remote:path [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). 
(default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). 
(default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. (default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. 
(default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. - --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. 
(default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
- --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. 
(default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. 
- --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. 
- --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. 
- --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. - --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. 
(default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. (default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. 
Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. 
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors Delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_genautocomplete.md b/docs/content/commands/rclone_genautocomplete.md index 6f381e2cb..c7962f628 100644 --- a/docs/content/commands/rclone_genautocomplete.md +++ b/docs/content/commands/rclone_genautocomplete.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone genautocomplete" slug: rclone_genautocomplete url: /commands/rclone_genautocomplete/ @@ -24,281 +24,299 @@ Run with --help to list the supported shells. ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. 
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. 
- --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. 
(default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. 
- --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. 
Uses more memory but fewer transactions. - --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. 
- --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. - --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. 
(default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. - --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. 
- --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. - --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. 
- --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. 
(default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. 
(default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. 
(default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. + --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. 
(default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. + --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. 
(default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. + --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access, if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO @@ -307,4 +325,4 @@ Run with --help to list the supported shells. * [rclone genautocomplete bash](/commands/rclone_genautocomplete_bash/) - Output bash completion script for rclone. * [rclone genautocomplete zsh](/commands/rclone_genautocomplete_zsh/) - Output zsh completion script for rclone. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_genautocomplete_bash.md b/docs/content/commands/rclone_genautocomplete_bash.md index a1efd2a66..1ecafa57e 100644 --- a/docs/content/commands/rclone_genautocomplete_bash.md +++ b/docs/content/commands/rclone_genautocomplete_bash.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone genautocomplete bash" slug: rclone_genautocomplete_bash url: /commands/rclone_genautocomplete_bash/ @@ -40,285 +40,303 @@ rclone genautocomplete bash [output_file] [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. 
(default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. 
(default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. 
(default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. 
-      --drive-import-formats string   Comma separated list of preferred formats for uploading Google docs.
-      --drive-keep-revision-forever   Keep new head revision of each file forever.
-      --drive-list-chunk int   Size of listing chunk 100-1000. 0 to disable. (default 1000)
-      --drive-root-folder-id string   ID of the root folder
-      --drive-scope string   Scope that rclone should use when requesting access from drive.
-      --drive-service-account-credentials string   Service Account Credentials JSON blob
-      --drive-service-account-file string   Service Account Credentials JSON file path
-      --drive-shared-with-me   Only show files that are shared with me.
-      --drive-skip-gdocs   Skip google documents in all listings.
-      --drive-team-drive string   ID of the Team Drive
-      --drive-trashed-only   Only show files that are in the trash.
-      --drive-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 8M)
-      --drive-use-created-date   Use file created date instead of modified date.,
-      --drive-use-trash   Send files to the trash instead of deleting permanently. (default true)
-      --drive-v2-download-min-size SizeSuffix   If Object's are greater, use drive v2 API to download. (default off)
-      --dropbox-chunk-size SizeSuffix   Upload chunk size. (< 150M). (default 48M)
-      --dropbox-client-id string   Dropbox App Client Id
-      --dropbox-client-secret string   Dropbox App Client Secret
-      --dropbox-impersonate string   Impersonate this user when using a business account.
-  -n, --dry-run   Do a trial run with no permanent changes
-      --dump string   List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
-      --dump-bodies   Dump HTTP headers and bodies - may contain sensitive info
-      --dump-headers   Dump HTTP bodies - may contain sensitive info
-      --exclude stringArray   Exclude files matching pattern
-      --exclude-from stringArray   Read exclude patterns from file
-      --exclude-if-present string   Exclude directories if filename is present
-      --fast-list   Use recursive list if available. Uses more memory but fewer transactions.
-      --files-from stringArray   Read list of source-file names from file
-  -f, --filter stringArray   Add a file-filtering rule
-      --filter-from stringArray   Read filtering patterns from a file
-      --ftp-host string   FTP host to connect to
-      --ftp-pass string   FTP password
-      --ftp-port string   FTP port, leave blank to use default (21)
-      --ftp-user string   FTP username, leave blank for current username, $USER
-      --gcs-bucket-acl string   Access Control List for new buckets.
-      --gcs-client-id string   Google Application Client Id
-      --gcs-client-secret string   Google Application Client Secret
-      --gcs-location string   Location for the newly created buckets.
-      --gcs-object-acl string   Access Control List for new objects.
-      --gcs-project-number string   Project number.
-      --gcs-service-account-file string   Service Account Credentials JSON file path
-      --gcs-storage-class string   The storage class to use when storing objects in Google Cloud Storage.
-      --http-url string   URL of http host to connect to
-      --hubic-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
-      --hubic-client-id string   Hubic Client Id
-      --hubic-client-secret string   Hubic Client Secret
-      --ignore-case   Ignore case in filters (case insensitive)
-      --ignore-checksum   Skip post copy check of checksums.
-      --ignore-errors   delete even if there are I/O errors
-      --ignore-existing   Skip all files that exist on destination
-      --ignore-size   Ignore size when skipping use mod-time or checksum.
-  -I, --ignore-times   Don't skip files that match size and time - transfer all files
-      --immutable   Do not modify files. Fail if existing files have been modified.
-      --include stringArray   Include files matching pattern
-      --include-from stringArray   Read include patterns from file
-      --jottacloud-hard-delete   Delete files permanently rather than putting them into the trash.
-      --jottacloud-md5-memory-limit SizeSuffix   Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
-      --jottacloud-mountpoint string   The mountpoint to use.
-      --jottacloud-pass string   Password.
-      --jottacloud-unlink   Remove existing public link to file/folder with link command rather than creating.
-      --jottacloud-user string   User Name
-      --local-no-check-updated   Don't check to see if the files change during upload
-      --local-no-unicode-normalization   Don't apply unicode normalization to paths and filenames (Deprecated)
-      --local-nounc string   Disable UNC (long path names) conversion on Windows
-      --log-file string   Log everything to this file
-      --log-format string   Comma separated list of log format options (default "date,time")
-      --log-level string   Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
-      --low-level-retries int   Number of low level retries to do. (default 10)
-      --max-age duration   Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-      --max-backlog int   Maximum number of objects in sync or check backlog. (default 10000)
-      --max-delete int   When synchronizing, limit the number of deletes (default -1)
-      --max-depth int   If set limits the recursion depth to this. (default -1)
-      --max-size int   Only transfer files smaller than this in k or suffix b|k|M|G (default off)
-      --max-transfer int   Maximum size of data to transfer. (default off)
-      --mega-debug   Output more debug from Mega.
-      --mega-hard-delete   Delete files permanently rather than putting them into the trash.
-      --mega-pass string   Password.
-      --mega-user string   User name
-      --memprofile string   Write memory profile to file
-      --min-age duration   Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-      --min-size int   Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-      --modify-window duration   Max time diff to be considered the same (default 1ns)
-      --no-check-certificate   Do not verify the server SSL certificate. Insecure.
-      --no-gzip-encoding   Don't set Accept-Encoding: gzip.
-      --no-traverse   Obsolete - does nothing.
-      --no-update-modtime   Don't update destination mod-time if files identical.
-  -x, --one-file-system   Don't cross filesystem boundaries (unix/macOS only).
-      --onedrive-chunk-size SizeSuffix   Chunk size to upload files with - must be multiple of 320k. (default 10M)
-      --onedrive-client-id string   Microsoft App Client Id
-      --onedrive-client-secret string   Microsoft App Client Secret
-      --onedrive-drive-id string   The ID of the drive to use
-      --onedrive-drive-type string   The type of the drive ( personal | business | documentLibrary )
-      --onedrive-expose-onenote-files   Set to make OneNote files show up in directory listings.
-      --opendrive-password string   Password.
-      --opendrive-username string   Username
-      --pcloud-client-id string   Pcloud App Client Id
-      --pcloud-client-secret string   Pcloud App Client Secret
-  -P, --progress   Show progress during transfer.
-      --qingstor-access-key-id string   QingStor Access Key ID
-      --qingstor-connection-retries int   Number of connection retries. (default 3)
-      --qingstor-endpoint string   Enter a endpoint URL to connection QingStor API.
-      --qingstor-env-auth   Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
-      --qingstor-secret-access-key string   QingStor Secret Access Key (password)
-      --qingstor-zone string   Zone to connect to.
-  -q, --quiet   Print as little stuff as possible
-      --rc   Enable the remote control server.
-      --rc-addr string   IPaddress:Port or :Port to bind server to. (default "localhost:5572")
-      --rc-cert string   SSL PEM key (concatenation of certificate and CA certificate)
-      --rc-client-ca string   Client certificate authority to verify clients with
-      --rc-files string   Path to local files to serve on the HTTP server.
-      --rc-htpasswd string   htpasswd file - if not provided no authentication is done
-      --rc-key string   SSL PEM Private key
-      --rc-max-header-bytes int   Maximum size of request header (default 4096)
-      --rc-no-auth   Don't require auth for certain methods.
-      --rc-pass string   Password for authentication.
-      --rc-realm string   realm for authentication (default "rclone")
-      --rc-serve   Enable the serving of remote objects.
-      --rc-server-read-timeout duration   Timeout for server reading data (default 1h0m0s)
-      --rc-server-write-timeout duration   Timeout for server writing data (default 1h0m0s)
-      --rc-user string   User name for authentication.
-      --retries int   Retry operations this many times if they fail (default 3)
-      --retries-sleep duration   Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
-      --s3-access-key-id string   AWS Access Key ID.
-      --s3-acl string   Canned ACL used when creating buckets and storing or copying objects.
-      --s3-chunk-size SizeSuffix   Chunk size to use for uploading. (default 5M)
-      --s3-disable-checksum   Don't store MD5 checksum with object metadata
-      --s3-endpoint string   Endpoint for S3 API.
-      --s3-env-auth   Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
-      --s3-force-path-style   If true use path style access if false use virtual hosted style. (default true)
-      --s3-location-constraint string   Location constraint - must be set to match the Region.
-      --s3-provider string   Choose your S3 provider.
-      --s3-region string   Region to connect to.
-      --s3-secret-access-key string   AWS Secret Access Key (password)
-      --s3-server-side-encryption string   The server-side encryption algorithm used when storing this object in S3.
-      --s3-session-token string   An AWS session token
-      --s3-sse-kms-key-id string   If using KMS ID you must provide the ARN of Key.
-      --s3-storage-class string   The storage class to use when storing new objects in S3.
-      --s3-upload-concurrency int   Concurrency for multipart uploads. (default 2)
-      --s3-v2-auth   If true use v2 authentication.
-      --sftp-ask-password   Allow asking for SFTP password when needed.
-      --sftp-disable-hashcheck   Disable the execution of SSH commands to determine if remote file hashing is available.
-      --sftp-host string   SSH host to connect to
-      --sftp-key-file string   Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
-      --sftp-pass string   SSH password, leave blank to use ssh-agent.
-      --sftp-path-override string   Override path used by SSH connection.
-      --sftp-port string   SSH port, leave blank to use default (22)
-      --sftp-set-modtime   Set the modified time on the remote if set. (default true)
-      --sftp-use-insecure-cipher   Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
-      --sftp-user string   SSH username, leave blank for current username, ncw
-      --size-only   Skip based on size only, not mod-time or checksum
-      --skip-links   Don't warn about skipped symlinks.
-      --stats duration   Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
-      --stats-file-name-length int   Max file name length in stats. 0 for no limit (default 40)
-      --stats-log-level string   Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
-      --stats-one-line   Make the stats fit on one line.
-      --stats-unit string   Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
-      --streaming-upload-cutoff int   Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
-      --suffix string   Suffix for use with --backup-dir.
-      --swift-auth string   Authentication URL for server (OS_AUTH_URL).
-      --swift-auth-token string   Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
-      --swift-auth-version int   AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
-      --swift-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
-      --swift-domain string   User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
-      --swift-endpoint-type string   Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-      --swift-env-auth   Get swift credentials from environment variables in standard OpenStack form.
-      --swift-key string   API key or password (OS_PASSWORD).
-      --swift-region string   Region name - optional (OS_REGION_NAME)
-      --swift-storage-policy string   The storage policy to use when creating a new container
-      --swift-storage-url string   Storage URL - optional (OS_STORAGE_URL)
-      --swift-tenant string   Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
-      --swift-tenant-domain string   Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
-      --swift-tenant-id string   Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
-      --swift-user string   User name to log in (OS_USERNAME).
-      --swift-user-id string   User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
-      --syslog   Use Syslog for logging
-      --syslog-facility string   Facility for syslog, eg KERN,USER,... (default "DAEMON")
-      --timeout duration   IO idle timeout (default 5m0s)
-      --tpslimit float   Limit HTTP transactions per second to this.
-      --tpslimit-burst int   Max burst of transactions for --tpslimit. (default 1)
-      --track-renames   When synchronizing, track file renames and do a server side move if possible
-      --transfers int   Number of file transfers to run in parallel. (default 4)
-      --union-remotes string   List of space separated remotes.
-  -u, --update   Skip files that are newer on the destination.
-      --use-server-modtime   Use server modified time instead of object metadata
-      --user-agent string   Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45")
-  -v, --verbose count   Print lots more stuff (repeat for more)
-      --webdav-bearer-token string   Bearer token instead of user/pass (eg a Macaroon)
-      --webdav-pass string   Password.
-      --webdav-url string   URL of http host to connect to
-      --webdav-user string   User name
-      --webdav-vendor string   Name of the Webdav site/service/software you are using
-      --yandex-client-id string   Yandex Client Id
-      --yandex-client-secret string   Yandex Client Secret
-      --yandex-unlink   Remove existing public link to file/folder with link command rather than creating.
+      --acd-auth-url string   Auth server URL.
+      --acd-client-id string   Amazon Application Client ID.
+      --acd-client-secret string   Amazon Application Client Secret.
+      --acd-templink-threshold SizeSuffix   Files >= this size will be downloaded via their tempLink. (default 9G)
+      --acd-token-url string   Token server url.
+      --acd-upload-wait-per-gb Duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+      --alias-remote string   Remote or path to alias.
+      --ask-password   Allow prompt for password for encrypted configuration. (default true)
+      --auto-confirm   If enabled, do not request console confirmation.
+      --azureblob-access-tier string   Access tier of blob: hot, cool or archive.
+      --azureblob-account string   Storage Account Name (leave blank to use connection string or SAS URL)
+      --azureblob-chunk-size SizeSuffix   Upload chunk size (<= 100MB). (default 4M)
+      --azureblob-endpoint string   Endpoint for the service
+      --azureblob-key string   Storage Account Key (leave blank to use connection string or SAS URL)
+      --azureblob-list-chunk int   Size of blob list. (default 5000)
+      --azureblob-sas-url string   SAS URL for container level access only
+      --azureblob-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+      --b2-account string   Account ID or Application Key ID
+      --b2-chunk-size SizeSuffix   Upload chunk size. Must fit in memory. (default 96M)
+      --b2-disable-checksum   Disable checksums for large (> upload cutoff) files
+      --b2-endpoint string   Endpoint for the service.
+      --b2-hard-delete   Permanently delete files on remote removal, otherwise hide files.
+      --b2-key string   Application Key
+      --b2-test-mode string   A flag string for X-Bz-Test-Mode header for debugging.
+      --b2-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload. (default 200M)
+      --b2-versions   Include old versions in directory listings.
+      --backup-dir string   Make backups into hierarchy based in DIR.
+      --bind string   Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+      --box-client-id string   Box App Client Id.
+      --box-client-secret string   Box App Client Secret
+      --box-commit-retries int   Max number of times to try committing a multipart file. (default 100)
+      --box-upload-cutoff SizeSuffix   Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+      --buffer-size SizeSuffix   In memory buffer size when reading files for each --transfer. (default 16M)
+      --bwlimit BwTimetable   Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+      --cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+      --cache-chunk-no-memory   Disable the in-memory cache for storing chunks during streaming.
+      --cache-chunk-path string   Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
+      --cache-chunk-size SizeSuffix   The size of a chunk (partial file data). (default 5M)
+      --cache-chunk-total-size SizeSuffix   The total size that the chunks can take up on the local disk. (default 10G)
+      --cache-db-path string   Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
+      --cache-db-purge   Clear all the cached data for this remote on start.
+      --cache-db-wait-time Duration   How long to wait for the DB to be available - 0 is unlimited (default 1s)
+      --cache-dir string   Directory rclone will use for caching. (default "$HOME/.cache/rclone")
+      --cache-info-age Duration   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+      --cache-plex-insecure string   Skip all certificate verifications when connecting to the Plex server
+      --cache-plex-password string   The password of the Plex user
+      --cache-plex-url string   The URL of the Plex server
+      --cache-plex-username string   The username of the Plex user
+      --cache-read-retries int   How many times to retry a read from a cache storage. (default 10)
+      --cache-remote string   Remote to cache.
+      --cache-rps int   Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+      --cache-tmp-upload-path string   Directory to keep temporary files until they are uploaded.
+      --cache-tmp-wait-time Duration   How long should files be stored in local cache before being uploaded (default 15s)
+      --cache-workers int   How many workers should run in parallel to download chunks. (default 4)
+      --cache-writes   Cache file data on writes through the FS
+      --checkers int   Number of checkers to run in parallel. (default 8)
+  -c, --checksum   Skip based on checksum (if available) & size, not mod-time & size
+      --config string   Config file. (default "/home/ncw/.rclone.conf")
+      --contimeout duration   Connect timeout (default 1m0s)
+  -L, --copy-links   Follow symlinks and copy the pointed to item.
+      --cpuprofile string   Write cpu profile to file
+      --crypt-directory-name-encryption   Option to either encrypt directory names or leave them intact. (default true)
+      --crypt-filename-encryption string   How to encrypt the filenames. (default "standard")
+      --crypt-password string   Password or pass phrase for encryption.
+      --crypt-password2 string   Password or pass phrase for salt. Optional but recommended.
+      --crypt-remote string   Remote to encrypt/decrypt.
+      --crypt-show-mapping   For all files listed show how the names encrypt.
+      --delete-after   When synchronizing, delete files on destination after transferring (default)
+      --delete-before   When synchronizing, delete files on destination before transferring
+      --delete-during   When synchronizing, delete files during transfer
+      --delete-excluded   Delete files on dest excluded from sync
+      --disable string   Disable a comma separated list of features. Use help to see a list.
+      --drive-acknowledge-abuse   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+      --drive-allow-import-name-change   Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+      --drive-alternate-export   Use alternate export URLs for google documents export.,
+      --drive-auth-owner-only   Only consider files owned by the authenticated user.
+      --drive-chunk-size SizeSuffix   Upload chunk size. Must a power of 2 >= 256k. (default 8M)
+      --drive-client-id string   Google Application Client Id
+      --drive-client-secret string   Google Application Client Secret
+      --drive-export-formats string   Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+      --drive-formats string   Deprecated: see export_formats
+      --drive-impersonate string   Impersonate this user when using a service account.
+      --drive-import-formats string   Comma separated list of preferred formats for uploading Google docs.
+      --drive-keep-revision-forever   Keep new head revision of each file forever.
+      --drive-list-chunk int   Size of listing chunk 100-1000. 0 to disable. (default 1000)
+      --drive-pacer-burst int   Number of API calls to allow without sleeping. (default 100)
+      --drive-pacer-min-sleep Duration   Minimum time to sleep between API calls. (default 100ms)
+      --drive-root-folder-id string   ID of the root folder
+      --drive-scope string   Scope that rclone should use when requesting access from drive.
+      --drive-service-account-credentials string   Service Account Credentials JSON blob
+      --drive-service-account-file string   Service Account Credentials JSON file path
+      --drive-shared-with-me   Only show files that are shared with me.
+      --drive-skip-gdocs   Skip google documents in all listings.
+      --drive-team-drive string   ID of the Team Drive
+      --drive-trashed-only   Only show files that are in the trash.
+      --drive-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 8M)
+      --drive-use-created-date   Use file created date instead of modified date.,
+      --drive-use-trash   Send files to the trash instead of deleting permanently. (default true)
+      --drive-v2-download-min-size SizeSuffix   If Object's are greater, use drive v2 API to download. (default off)
+      --dropbox-chunk-size SizeSuffix   Upload chunk size. (< 150M). (default 48M)
+      --dropbox-client-id string   Dropbox App Client Id
+      --dropbox-client-secret string   Dropbox App Client Secret
+      --dropbox-impersonate string   Impersonate this user when using a business account.
+  -n, --dry-run   Do a trial run with no permanent changes
+      --dump DumpFlags   List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+      --dump-bodies   Dump HTTP headers and bodies - may contain sensitive info
+      --dump-headers   Dump HTTP bodies - may contain sensitive info
+      --exclude stringArray   Exclude files matching pattern
+      --exclude-from stringArray   Read exclude patterns from file
+      --exclude-if-present string   Exclude directories if filename is present
+      --fast-list   Use recursive list if available. Uses more memory but fewer transactions.
+      --files-from stringArray   Read list of source-file names from file
+  -f, --filter stringArray   Add a file-filtering rule
+      --filter-from stringArray   Read filtering patterns from a file
+      --ftp-host string   FTP host to connect to
+      --ftp-pass string   FTP password
+      --ftp-port string   FTP port, leave blank to use default (21)
+      --ftp-user string   FTP username, leave blank for current username, $USER
+      --gcs-bucket-acl string   Access Control List for new buckets.
+      --gcs-client-id string   Google Application Client Id
+      --gcs-client-secret string   Google Application Client Secret
+      --gcs-location string   Location for the newly created buckets.
+      --gcs-object-acl string   Access Control List for new objects.
+      --gcs-project-number string   Project number.
+      --gcs-service-account-file string   Service Account Credentials JSON file path
+      --gcs-storage-class string   The storage class to use when storing objects in Google Cloud Storage.
+      --http-url string   URL of http host to connect to
+      --hubic-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
+      --hubic-client-id string   Hubic Client Id
+      --hubic-client-secret string   Hubic Client Secret
+      --hubic-no-chunk   Don't chunk files during streaming upload.
+      --ignore-case   Ignore case in filters (case insensitive)
+      --ignore-checksum   Skip post copy check of checksums.
+      --ignore-errors   delete even if there are I/O errors
+      --ignore-existing   Skip all files that exist on destination
+      --ignore-size   Ignore size when skipping use mod-time or checksum.
+  -I, --ignore-times   Don't skip files that match size and time - transfer all files
+      --immutable   Do not modify files. Fail if existing files have been modified.
+      --include stringArray   Include files matching pattern
+      --include-from stringArray   Read include patterns from file
+      --jottacloud-hard-delete   Delete files permanently rather than putting them into the trash.
+      --jottacloud-md5-memory-limit SizeSuffix   Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+      --jottacloud-mountpoint string   The mountpoint to use.
+      --jottacloud-unlink   Remove existing public link to file/folder with link command rather than creating.
+      --jottacloud-upload-resume-limit SizeSuffix   Files bigger than this can be resumed if the upload fail's. (default 10M)
+      --jottacloud-user string   User Name:
+  -l, --links   Translate symlinks to/from regular files with a '.rclonelink' extension
+      --local-no-check-updated   Don't check to see if the files change during upload
+      --local-no-unicode-normalization   Don't apply unicode normalization to paths and filenames (Deprecated)
+      --local-nounc string   Disable UNC (long path names) conversion on Windows
+      --log-file string   Log everything to this file
+      --log-format string   Comma separated list of log format options (default "date,time")
+      --log-level string   Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+      --low-level-retries int   Number of low level retries to do. (default 10)
+      --max-age Duration   Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --max-backlog int   Maximum number of objects in sync or check backlog. (default 10000)
+      --max-delete int   When synchronizing, limit the number of deletes (default -1)
+      --max-depth int   If set limits the recursion depth to this. (default -1)
+      --max-size SizeSuffix   Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+      --max-transfer SizeSuffix   Maximum size of data to transfer. (default off)
+      --mega-debug   Output more debug from Mega.
+      --mega-hard-delete   Delete files permanently rather than putting them into the trash.
+      --mega-pass string   Password.
+      --mega-user string   User name
+      --memprofile string   Write memory profile to file
+      --min-age Duration   Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --min-size SizeSuffix   Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+      --modify-window duration   Max time diff to be considered the same (default 1ns)
+      --no-check-certificate   Do not verify the server SSL certificate. Insecure.
+      --no-gzip-encoding   Don't set Accept-Encoding: gzip.
+      --no-traverse   Don't traverse destination file system on copy.
+      --no-update-modtime   Don't update destination mod-time if files identical.
+  -x, --one-file-system   Don't cross filesystem boundaries (unix/macOS only).
+      --onedrive-chunk-size SizeSuffix   Chunk size to upload files with - must be multiple of 320k. (default 10M)
+      --onedrive-client-id string   Microsoft App Client Id
+      --onedrive-client-secret string   Microsoft App Client Secret
+      --onedrive-drive-id string   The ID of the drive to use
+      --onedrive-drive-type string   The type of the drive ( personal | business | documentLibrary )
+      --onedrive-expose-onenote-files   Set to make OneNote files show up in directory listings.
+      --opendrive-password string   Password.
+      --opendrive-username string   Username
+      --pcloud-client-id string   Pcloud App Client Id
+      --pcloud-client-secret string   Pcloud App Client Secret
+  -P, --progress   Show progress during transfer.
+      --qingstor-access-key-id string   QingStor Access Key ID
+      --qingstor-chunk-size SizeSuffix   Chunk size to use for uploading. (default 4M)
+      --qingstor-connection-retries int   Number of connection retries. (default 3)
+      --qingstor-endpoint string   Enter a endpoint URL to connection QingStor API.
+      --qingstor-env-auth   Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
+      --qingstor-secret-access-key string   QingStor Secret Access Key (password)
+      --qingstor-upload-concurrency int   Concurrency for multipart uploads. (default 1)
+      --qingstor-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 200M)
+      --qingstor-zone string   Zone to connect to.
+  -q, --quiet   Print as little stuff as possible
+      --rc   Enable the remote control server.
+      --rc-addr string   IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+      --rc-cert string   SSL PEM key (concatenation of certificate and CA certificate)
+      --rc-client-ca string   Client certificate authority to verify clients with
+      --rc-files string   Path to local files to serve on the HTTP server.
+      --rc-htpasswd string   htpasswd file - if not provided no authentication is done
+      --rc-key string   SSL PEM Private key
+      --rc-max-header-bytes int   Maximum size of request header (default 4096)
+      --rc-no-auth   Don't require auth for certain methods.
+      --rc-pass string   Password for authentication.
+      --rc-realm string   realm for authentication (default "rclone")
+      --rc-serve   Enable the serving of remote objects.
+      --rc-server-read-timeout duration   Timeout for server reading data (default 1h0m0s)
+      --rc-server-write-timeout duration   Timeout for server writing data (default 1h0m0s)
+      --rc-user string   User name for authentication.
+      --retries int   Retry operations this many times if they fail (default 3)
+      --retries-sleep duration   Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+      --s3-access-key-id string   AWS Access Key ID.
+      --s3-acl string   Canned ACL used when creating buckets and storing or copying objects.
+      --s3-bucket-acl string   Canned ACL used when creating buckets.
+      --s3-chunk-size SizeSuffix   Chunk size to use for uploading. (default 5M)
+      --s3-disable-checksum   Don't store MD5 checksum with object metadata
+      --s3-endpoint string   Endpoint for S3 API.
+      --s3-env-auth   Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+      --s3-force-path-style   If true use path style access if false use virtual hosted style. (default true)
+      --s3-location-constraint string   Location constraint - must be set to match the Region.
+      --s3-provider string   Choose your S3 provider.
+      --s3-region string   Region to connect to.
+      --s3-secret-access-key string   AWS Secret Access Key (password)
+      --s3-server-side-encryption string   The server-side encryption algorithm used when storing this object in S3.
+      --s3-session-token string   An AWS session token
+      --s3-sse-kms-key-id string   If using KMS ID you must provide the ARN of Key.
+      --s3-storage-class string   The storage class to use when storing new objects in S3.
+      --s3-upload-concurrency int   Concurrency for multipart uploads. (default 4)
+      --s3-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 200M)
+      --s3-v2-auth   If true use v2 authentication.
+      --sftp-ask-password   Allow asking for SFTP password when needed.
+      --sftp-disable-hashcheck   Disable the execution of SSH commands to determine if remote file hashing is available.
+      --sftp-host string   SSH host to connect to
+      --sftp-key-file string   Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
+      --sftp-key-file-pass string   The passphrase to decrypt the PEM-encoded private key file.
+      --sftp-key-use-agent   When set forces the usage of the ssh-agent.
+      --sftp-pass string   SSH password, leave blank to use ssh-agent.
+      --sftp-path-override string   Override path used by SSH connection.
+      --sftp-port string   SSH port, leave blank to use default (22)
+      --sftp-set-modtime   Set the modified time on the remote if set. (default true)
+      --sftp-use-insecure-cipher   Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+      --sftp-user string   SSH username, leave blank for current username, ncw
+      --size-only   Skip based on size only, not mod-time or checksum
+      --skip-links   Don't warn about skipped symlinks.
+      --stats duration   Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+      --stats-file-name-length int   Max file name length in stats. 0 for no limit (default 45)
+      --stats-log-level string   Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+      --stats-one-line   Make the stats fit on one line.
+      --stats-unit string   Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+      --streaming-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+      --suffix string   Suffix for use with --backup-dir.
+      --swift-application-credential-id string   Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
+      --swift-application-credential-name string   Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
+      --swift-application-credential-secret string   Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
+      --swift-auth string   Authentication URL for server (OS_AUTH_URL).
+      --swift-auth-token string   Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+      --swift-auth-version int   AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+      --swift-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
+      --swift-domain string   User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+      --swift-endpoint-type string   Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+      --swift-env-auth   Get swift credentials from environment variables in standard OpenStack form.
+      --swift-key string   API key or password (OS_PASSWORD).
+      --swift-no-chunk   Don't chunk files during streaming upload.
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-cookies Enable session cookiejar.
+ --use-mmap Use mmap allocator (see docs).
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
+ --yandex-unlink Remove existing public link to file/folder with link command rather than creating.
 ```

### SEE ALSO

* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell.

-###### Auto generated by spf13/cobra on 24-Nov-2018
+###### Auto generated by spf13/cobra on 9-Feb-2019
diff --git a/docs/content/commands/rclone_genautocomplete_zsh.md b/docs/content/commands/rclone_genautocomplete_zsh.md
index 71cec3ab2..eace78138 100644
--- a/docs/content/commands/rclone_genautocomplete_zsh.md
+++ b/docs/content/commands/rclone_genautocomplete_zsh.md
@@ -1,5 +1,5 @@
 ---
-date: 2018-11-24T13:43:29Z
+date: 2019-02-09T10:42:18Z
 title: "rclone genautocomplete zsh"
 slug: rclone_genautocomplete_zsh
 url: /commands/rclone_genautocomplete_zsh/
@@ -40,285 +40,303 @@ rclone genautocomplete zsh [output_file] [flags]

### Options inherited from parent commands

 ```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob: hot, cool or archive.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-list-chunk int Size of blob list. (default 5000)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
- --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
- --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
- --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
- --cache-db-purge Clear all the cached data for this remote on start.
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
- --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
- --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks. (default 4)
- --cache-writes Cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transferring (default)
- --delete-before When synchronizing, delete files on destination before transferring
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
- --drive-alternate-export Use alternate export URLs for google documents export.,
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-formats string Deprecated: see export_formats
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
- --drive-keep-revision-forever Keep new head revision of each file forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-credentials string Service Account Credentials JSON blob
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me.
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-team-drive string ID of the Team Drive
- --drive-trashed-only Only show files that are in the trash.
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use file created date instead of modified date.,
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
- --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- --dropbox-impersonate string Impersonate this user when using a business account.
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, $USER
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-case Ignore case in filters (case insensitive)
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-format string Comma separated list of log format options (default "date,time")
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --onedrive-drive-id string The ID of the drive to use
- --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
- --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-files string Path to local files to serve on the HTTP server.
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-no-auth Don't require auth for certain methods.
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-serve Enable the serving of remote objects.
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-session-token string An AWS session token
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing new objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --s3-v2-auth If true use v2 authentication.
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- --union-remotes string List of space separated remotes.
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
- --yandex-unlink Remove existing public link to file/folder with link command rather than creating.
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-disable-checksum Disable checksums for large (> upload cutoff) files
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum (if available) & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.,
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
+ --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.,
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ --dropbox-impersonate string Impersonate this user when using a business account.
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP bodies - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+   --files-from stringArray Read list of source-file names from file
+   -f, --filter stringArray Add a file-filtering rule
+   --filter-from stringArray Read filtering patterns from a file
+   --ftp-host string FTP host to connect to
+   --ftp-pass string FTP password
+   --ftp-port string FTP port, leave blank to use default (21)
+   --ftp-user string FTP username, leave blank for current username, $USER
+   --gcs-bucket-acl string Access Control List for new buckets.
+   --gcs-client-id string Google Application Client Id
+   --gcs-client-secret string Google Application Client Secret
+   --gcs-location string Location for the newly created buckets.
+   --gcs-object-acl string Access Control List for new objects.
+   --gcs-project-number string Project number.
+   --gcs-service-account-file string Service Account Credentials JSON file path
+   --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+   --http-url string URL of http host to connect to
+   --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+   --hubic-client-id string Hubic Client Id
+   --hubic-client-secret string Hubic Client Secret
+   --hubic-no-chunk Don't chunk files during streaming upload.
+   --ignore-case Ignore case in filters (case insensitive)
+   --ignore-checksum Skip post copy check of checksums.
+   --ignore-errors Delete even if there are I/O errors
+   --ignore-existing Skip all files that exist on destination
+   --ignore-size Ignore size when skipping; use mod-time or checksum.
+   -I, --ignore-times Don't skip files that match size and time - transfer all files
+   --immutable Do not modify files. Fail if existing files have been modified.
+   --include stringArray Include files matching pattern
+   --include-from stringArray Read include patterns from file
+   --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+   --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+   --jottacloud-mountpoint string The mountpoint to use.
+   --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+   --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
+   --jottacloud-user string User Name
+   -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
+   --local-no-check-updated Don't check to see if the files change during upload
+   --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+   --local-nounc string Disable UNC (long path names) conversion on Windows
+   --log-file string Log everything to this file
+   --log-format string Comma separated list of log format options (default "date,time")
+   --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+   --low-level-retries int Number of low level retries to do. (default 10)
+   --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+   --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+   --max-delete int When synchronizing, limit the number of deletes (default -1)
+   --max-depth int If set limits the recursion depth to this. (default -1)
+   --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+   --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
+   --mega-debug Output more debug from Mega.
+   --mega-hard-delete Delete files permanently rather than putting them into the trash.
+   --mega-pass string Password.
+   --mega-user string User name
+   --memprofile string Write memory profile to file
+   --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+   --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+   --modify-window duration Max time diff to be considered the same (default 1ns)
+   --no-check-certificate Do not verify the server SSL certificate. Insecure.
+   --no-gzip-encoding Don't set Accept-Encoding: gzip.
+   --no-traverse Don't traverse destination file system on copy.
+   --no-update-modtime Don't update destination mod-time if files identical.
+   -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+   --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+   --onedrive-client-id string Microsoft App Client Id
+   --onedrive-client-secret string Microsoft App Client Secret
+   --onedrive-drive-id string The ID of the drive to use
+   --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+   --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+   --opendrive-password string Password.
+   --opendrive-username string Username
+   --pcloud-client-id string Pcloud App Client Id
+   --pcloud-client-secret string Pcloud App Client Secret
+   -P, --progress Show progress during transfer.
+   --qingstor-access-key-id string QingStor Access Key ID
+   --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
+   --qingstor-connection-retries int Number of connection retries. (default 3)
+   --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+   --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+   --qingstor-secret-access-key string QingStor Secret Access Key (password)
+   --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
+   --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
+   --qingstor-zone string Zone to connect to.
+   -q, --quiet Print as little stuff as possible
+   --rc Enable the remote control server.
+   --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+   --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+   --rc-client-ca string Client certificate authority to verify clients with
+   --rc-files string Path to local files to serve on the HTTP server.
+   --rc-htpasswd string htpasswd file - if not provided no authentication is done
+   --rc-key string SSL PEM Private key
+   --rc-max-header-bytes int Maximum size of request header (default 4096)
+   --rc-no-auth Don't require auth for certain methods.
+   --rc-pass string Password for authentication.
+   --rc-realm string realm for authentication (default "rclone")
+   --rc-serve Enable the serving of remote objects.
+   --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+   --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+   --rc-user string User name for authentication.
+   --retries int Retry operations this many times if they fail (default 3)
+   --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+   --s3-access-key-id string AWS Access Key ID.
+   --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
+   --s3-bucket-acl string Canned ACL used when creating buckets.
+   --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+   --s3-disable-checksum Don't store MD5 checksum with object metadata
+   --s3-endpoint string Endpoint for S3 API.
+   --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+   --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+   --s3-location-constraint string Location constraint - must be set to match the Region.
+   --s3-provider string Choose your S3 provider.
+   --s3-region string Region to connect to.
+   --s3-secret-access-key string AWS Secret Access Key (password)
+   --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+   --s3-session-token string An AWS session token
+   --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the Key.
+   --s3-storage-class string The storage class to use when storing new objects in S3.
+   --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
+   --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
+   --s3-v2-auth If true use v2 authentication.
+   --sftp-ask-password Allow asking for SFTP password when needed.
+   --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+   --sftp-host string SSH host to connect to
+   --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
+   --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
+   --sftp-key-use-agent When set forces the usage of the ssh-agent.
+   --sftp-pass string SSH password, leave blank to use ssh-agent.
+   --sftp-path-override string Override path used by SSH connection.
+   --sftp-port string SSH port, leave blank to use default (22)
+   --sftp-set-modtime Set the modified time on the remote if set. (default true)
+   --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+   --sftp-user string SSH username, leave blank for current username, ncw
+   --size-only Skip based on size only, not mod-time or checksum
+   --skip-links Don't warn about skipped symlinks.
+   --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+   --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
+   --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+   --stats-one-line Make the stats fit on one line.
+   --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+   --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+   --suffix string Suffix for use with --backup-dir.
+   --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
+   --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
+   --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
+   --swift-auth string Authentication URL for server (OS_AUTH_URL).
+   --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+   --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+   --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+   --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+   --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+   --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+   --swift-key string API key or password (OS_PASSWORD).
+   --swift-no-chunk Don't chunk files during streaming upload.
+   --swift-region string Region name - optional (OS_REGION_NAME)
+   --swift-storage-policy string The storage policy to use when creating a new container
+   --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+   --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+   --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+   --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+   --swift-user string User name to log in (OS_USERNAME).
+   --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+   --syslog Use Syslog for logging
+   --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+   --timeout duration IO idle timeout (default 5m0s)
+   --tpslimit float Limit HTTP transactions per second to this.
+   --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+   --track-renames When synchronizing, track file renames and do a server side move if possible
+   --transfers int Number of file transfers to run in parallel. (default 4)
+   --union-remotes string List of space separated remotes.
+   -u, --update Skip files that are newer on the destination.
+   --use-cookies Enable session cookiejar.
+   --use-mmap Use mmap allocator (see docs).
+   --use-server-modtime Use server modified time instead of object metadata
+   --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
+   -v, --verbose count Print lots more stuff (repeat for more)
+   --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+   --webdav-pass string Password.
+   --webdav-url string URL of http host to connect to
+   --webdav-user string User name
+   --webdav-vendor string Name of the Webdav site/service/software you are using
+   --yandex-client-id string Yandex Client Id
+   --yandex-client-secret string Yandex Client Secret
+   --yandex-unlink Remove existing public link to file/folder with link command rather than creating.
 ```
 
 ### SEE ALSO
 
 * [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell.
 
-###### Auto generated by spf13/cobra on 24-Nov-2018
+###### Auto generated by spf13/cobra on 9-Feb-2019
diff --git a/docs/content/commands/rclone_gendocs.md b/docs/content/commands/rclone_gendocs.md
index 0baf2074b..ad8959594 100644
--- a/docs/content/commands/rclone_gendocs.md
+++ b/docs/content/commands/rclone_gendocs.md
@@ -1,5 +1,5 @@
 ---
-date: 2018-11-24T13:43:29Z
+date: 2019-02-09T10:42:18Z
 title: "rclone gendocs"
 slug: rclone_gendocs
 url: /commands/rclone_gendocs/
@@ -28,285 +28,303 @@ rclone gendocs output_directory [flags]
 ### Options inherited from parent commands
 
 ```
-   --acd-auth-url string Auth server URL.
-   --acd-client-id string Amazon Application Client ID.
-   --acd-client-secret string Amazon Application Client Secret.
-   --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
-   --acd-token-url string Token server url.
-   --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
-   --alias-remote string Remote or path to alias.
-   --ask-password Allow prompt for password for encrypted configuration. (default true)
-   --auto-confirm If enabled, do not request console confirmation.
-   --azureblob-access-tier string Access tier of blob: hot, cool or archive.
-   --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
-   --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
-   --azureblob-endpoint string Endpoint for the service
-   --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
-   --azureblob-list-chunk int Size of blob list. (default 5000)
-   --azureblob-sas-url string SAS URL for container level access only
-   --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
-   --b2-account string Account ID or Application Key ID
-   --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
-   --b2-endpoint string Endpoint for the service.
-   --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
-   --b2-key string Application Key
-   --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
-   --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
-   --b2-versions Include old versions in directory listings.
-   --backup-dir string Make backups into hierarchy based in DIR.
-   --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
-   --box-client-id string Box App Client Id.
-   --box-client-secret string Box App Client Secret
-   --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
-   --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
-   --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
-   --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
-   --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
-   --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
-   --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
-   --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
-   --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
-   --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
-   --cache-db-purge Clear all the cached data for this remote on start.
-   --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
-   --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
-   --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
-   --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
-   --cache-plex-password string The password of the Plex user
-   --cache-plex-url string The URL of the Plex server
-   --cache-plex-username string The username of the Plex user
-   --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
-   --cache-remote string Remote to cache.
-   --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
-   --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
-   --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
-   --cache-workers int How many workers should run in parallel to download chunks. (default 4)
-   --cache-writes Cache file data on writes through the FS
-   --checkers int Number of checkers to run in parallel. (default 8)
-   -c, --checksum Skip based on checksum & size, not mod-time & size
-   --config string Config file. (default "/home/ncw/.rclone.conf")
-   --contimeout duration Connect timeout (default 1m0s)
-   -L, --copy-links Follow symlinks and copy the pointed to item.
-   --cpuprofile string Write cpu profile to file
-   --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
-   --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
-   --crypt-password string Password or pass phrase for encryption.
-   --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
-   --crypt-remote string Remote to encrypt/decrypt.
-   --crypt-show-mapping For all files listed show how the names encrypt.
-   --delete-after When synchronizing, delete files on destination after transferring (default)
-   --delete-before When synchronizing, delete files on destination before transferring
-   --delete-during When synchronizing, delete files during transfer
-   --delete-excluded Delete files on dest excluded from sync
-   --disable string Disable a comma separated list of features. Use help to see a list.
-   --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
-   --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
-   --drive-alternate-export Use alternate export URLs for google documents export.,
-   --drive-auth-owner-only Only consider files owned by the authenticated user.
-   --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
-   --drive-client-id string Google Application Client Id
-   --drive-client-secret string Google Application Client Secret
-   --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
-   --drive-formats string Deprecated: see export_formats
-   --drive-impersonate string Impersonate this user when using a service account.
-   --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
-   --drive-keep-revision-forever Keep new head revision of each file forever.
-   --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
-   --drive-root-folder-id string ID of the root folder
-   --drive-scope string Scope that rclone should use when requesting access from drive.
-   --drive-service-account-credentials string Service Account Credentials JSON blob
-   --drive-service-account-file string Service Account Credentials JSON file path
-   --drive-shared-with-me Only show files that are shared with me.
-   --drive-skip-gdocs Skip google documents in all listings.
-   --drive-team-drive string ID of the Team Drive
-   --drive-trashed-only Only show files that are in the trash.
-   --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
-   --drive-use-created-date Use file created date instead of modified date.,
-   --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
-   --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
-   --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-   --dropbox-client-id string Dropbox App Client Id
-   --dropbox-client-secret string Dropbox App Client Secret
-   --dropbox-impersonate string Impersonate this user when using a business account.
-   -n, --dry-run Do a trial run with no permanent changes
-   --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
-   --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
-   --dump-headers Dump HTTP bodies - may contain sensitive info
-   --exclude stringArray Exclude files matching pattern
-   --exclude-from stringArray Read exclude patterns from file
-   --exclude-if-present string Exclude directories if filename is present
-   --fast-list Use recursive list if available. Uses more memory but fewer transactions.
-   --files-from stringArray Read list of source-file names from file
-   -f, --filter stringArray Add a file-filtering rule
-   --filter-from stringArray Read filtering patterns from a file
-   --ftp-host string FTP host to connect to
-   --ftp-pass string FTP password
-   --ftp-port string FTP port, leave blank to use default (21)
-   --ftp-user string FTP username, leave blank for current username, $USER
-   --gcs-bucket-acl string Access Control List for new buckets.
-   --gcs-client-id string Google Application Client Id
-   --gcs-client-secret string Google Application Client Secret
-   --gcs-location string Location for the newly created buckets.
-   --gcs-object-acl string Access Control List for new objects.
-   --gcs-project-number string Project number.
-   --gcs-service-account-file string Service Account Credentials JSON file path
-   --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
-   --http-url string URL of http host to connect to
-   --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
-   --hubic-client-id string Hubic Client Id
-   --hubic-client-secret string Hubic Client Secret
-   --ignore-case Ignore case in filters (case insensitive)
-   --ignore-checksum Skip post copy check of checksums.
-   --ignore-errors delete even if there are I/O errors
-   --ignore-existing Skip all files that exist on destination
-   --ignore-size Ignore size when skipping use mod-time or checksum.
-   -I, --ignore-times Don't skip files that match size and time - transfer all files
-   --immutable Do not modify files. Fail if existing files have been modified.
-   --include stringArray Include files matching pattern
-   --include-from stringArray Read include patterns from file
-   --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
-   --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
-   --jottacloud-mountpoint string The mountpoint to use.
-   --jottacloud-pass string Password.
-   --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
-   --jottacloud-user string User Name
-   --local-no-check-updated Don't check to see if the files change during upload
-   --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
-   --local-nounc string Disable UNC (long path names) conversion on Windows
-   --log-file string Log everything to this file
-   --log-format string Comma separated list of log format options (default "date,time")
-   --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
-   --low-level-retries int Number of low level retries to do. (default 10)
-   --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-   --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
-   --max-delete int When synchronizing, limit the number of deletes (default -1)
-   --max-depth int If set limits the recursion depth to this. (default -1)
-   --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
-   --max-transfer int Maximum size of data to transfer. (default off)
-   --mega-debug Output more debug from Mega.
-   --mega-hard-delete Delete files permanently rather than putting them into the trash.
-   --mega-pass string Password.
-   --mega-user string User name
-   --memprofile string Write memory profile to file
-   --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-   --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-   --modify-window duration Max time diff to be considered the same (default 1ns)
-   --no-check-certificate Do not verify the server SSL certificate. Insecure.
-   --no-gzip-encoding Don't set Accept-Encoding: gzip.
-   --no-traverse Obsolete - does nothing.
-   --no-update-modtime Don't update destination mod-time if files identical.
-   -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
-   --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
-   --onedrive-client-id string Microsoft App Client Id
-   --onedrive-client-secret string Microsoft App Client Secret
-   --onedrive-drive-id string The ID of the drive to use
-   --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-   --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
-   --opendrive-password string Password.
-   --opendrive-username string Username
-   --pcloud-client-id string Pcloud App Client Id
-   --pcloud-client-secret string Pcloud App Client Secret
-   -P, --progress Show progress during transfer.
-   --qingstor-access-key-id string QingStor Access Key ID
-   --qingstor-connection-retries int Number of connection retries. (default 3)
-   --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
-   --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
-   --qingstor-secret-access-key string QingStor Secret Access Key (password)
-   --qingstor-zone string Zone to connect to.
-   -q, --quiet Print as little stuff as possible
-   --rc Enable the remote control server.
-   --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
-   --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
-   --rc-client-ca string Client certificate authority to verify clients with
-   --rc-files string Path to local files to serve on the HTTP server.
-   --rc-htpasswd string htpasswd file - if not provided no authentication is done
-   --rc-key string SSL PEM Private key
-   --rc-max-header-bytes int Maximum size of request header (default 4096)
-   --rc-no-auth Don't require auth for certain methods.
-   --rc-pass string Password for authentication.
-   --rc-realm string realm for authentication (default "rclone")
-   --rc-serve Enable the serving of remote objects.
-   --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
-   --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
-   --rc-user string User name for authentication.
-   --retries int Retry operations this many times if they fail (default 3)
-   --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
-   --s3-access-key-id string AWS Access Key ID.
-   --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
-   --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
-   --s3-disable-checksum Don't store MD5 checksum with object metadata
-   --s3-endpoint string Endpoint for S3 API.
-   --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
-   --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
-   --s3-location-constraint string Location constraint - must be set to match the Region.
-   --s3-provider string Choose your S3 provider.
-   --s3-region string Region to connect to.
-   --s3-secret-access-key string AWS Secret Access Key (password)
-   --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
-   --s3-session-token string An AWS session token
-   --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
-   --s3-storage-class string The storage class to use when storing new objects in S3.
-   --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
-   --s3-v2-auth If true use v2 authentication.
-   --sftp-ask-password Allow asking for SFTP password when needed.
-   --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
-   --sftp-host string SSH host to connect to
-   --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
-   --sftp-pass string SSH password, leave blank to use ssh-agent.
-   --sftp-path-override string Override path used by SSH connection.
-   --sftp-port string SSH port, leave blank to use default (22)
-   --sftp-set-modtime Set the modified time on the remote if set. (default true)
-   --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
-   --sftp-user string SSH username, leave blank for current username, ncw
-   --size-only Skip based on size only, not mod-time or checksum
-   --skip-links Don't warn about skipped symlinks.
-   --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
-   --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
-   --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
-   --stats-one-line Make the stats fit on one line.
-   --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
-   --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
-   --suffix string Suffix for use with --backup-dir.
-   --swift-auth string Authentication URL for server (OS_AUTH_URL).
-   --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
-   --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
-   --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
-   --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
-   --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-   --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
-   --swift-key string API key or password (OS_PASSWORD).
-   --swift-region string Region name - optional (OS_REGION_NAME)
-   --swift-storage-policy string The storage policy to use when creating a new container
-   --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-   --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
-   --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
-   --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
-   --swift-user string User name to log in (OS_USERNAME).
-   --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
-   --syslog Use Syslog for logging
-   --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
-   --timeout duration IO idle timeout (default 5m0s)
-   --tpslimit float Limit HTTP transactions per second to this.
-   --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
-   --track-renames When synchronizing, track file renames and do a server side move if possible
-   --transfers int Number of file transfers to run in parallel. (default 4)
-   --union-remotes string List of space separated remotes.
-   -u, --update Skip files that are newer on the destination.
-   --use-server-modtime Use server modified time instead of object metadata
-   --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45")
-   -v, --verbose count Print lots more stuff (repeat for more)
-   --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
-   --webdav-pass string Password.
-   --webdav-url string URL of http host to connect to
-   --webdav-user string User name
-   --webdav-vendor string Name of the Webdav site/service/software you are using
-   --yandex-client-id string Yandex Client Id
-   --yandex-client-secret string Yandex Client Secret
-   --yandex-unlink Remove existing public link to file/folder with link command rather than creating.
+   --acd-auth-url string Auth server URL.
+   --acd-client-id string Amazon Application Client ID.
+   --acd-client-secret string Amazon Application Client Secret.
+   --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+   --acd-token-url string Token server url.
+   --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+   --alias-remote string Remote or path to alias.
+   --ask-password Allow prompt for password for encrypted configuration. (default true)
+   --auto-confirm If enabled, do not request console confirmation.
+   --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+   --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+   --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+   --azureblob-endpoint string Endpoint for the service
+   --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+   --azureblob-list-chunk int Size of blob list. (default 5000)
+   --azureblob-sas-url string SAS URL for container level access only
+   --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+   --b2-account string Account ID or Application Key ID
+   --b2-chunk-size SizeSuffix Upload chunk size.
Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. 
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. 
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_hashsum.md b/docs/content/commands/rclone_hashsum.md index f6583cf3d..41a602106 100644 --- a/docs/content/commands/rclone_hashsum.md +++ b/docs/content/commands/rclone_hashsum.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone hashsum" slug: rclone_hashsum url: /commands/rclone_hashsum/ @@ -42,285 +42,303 @@ rclone hashsum remote:path [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). 
(default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). 
(default 5M)
- --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
- --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
- --cache-db-purge Clear all the cached data for this remote on start.
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
- --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
- --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks. (default 4)
- --cache-writes Cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transferring (default)
- --delete-before When synchronizing, delete files on destination before transferring
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
- --drive-alternate-export Use alternate export URLs for google documents export.,
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-formats string Deprecated: see export_formats
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
- --drive-keep-revision-forever Keep new head revision of each file forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-credentials string Service Account Credentials JSON blob
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me.
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-team-drive string ID of the Team Drive
- --drive-trashed-only Only show files that are in the trash.
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use file created date instead of modified date.,
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
- --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- --dropbox-impersonate string Impersonate this user when using a business account.
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, $USER
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-case Ignore case in filters (case insensitive)
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-format string Comma separated list of log format options (default "date,time")
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --onedrive-drive-id string The ID of the drive to use
- --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
- --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-files string Path to local files to serve on the HTTP server.
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-no-auth Don't require auth for certain methods.
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-serve Enable the serving of remote objects.
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-session-token string An AWS session token
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing new objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --s3-v2-auth If true use v2 authentication.
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- --union-remotes string List of space separated remotes.
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
- --yandex-unlink Remove existing public link to file/folder with link command rather than creating.
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-disable-checksum Disable checksums for large (> upload cutoff) files
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum (if available) & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.,
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
+ --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.,
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ --dropbox-impersonate string Impersonate this user when using a business account.
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP bodies - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, $USER
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --hubic-no-chunk Don't chunk files during streaming upload.
+ --ignore-case Ignore case in filters (case insensitive)
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
+ --jottacloud-user string User Name:
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
+ --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-files string Path to local files to serve on the HTTP server.
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-no-auth Don't require auth for certain methods.
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-serve Enable the serving of remote objects.
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
+ --s3-bucket-acl string Canned ACL used when creating buckets.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
+ --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
+ --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
+ --sftp-key-use-agent When set forces the usage of the ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
+ --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
+ --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-no-chunk Don't chunk files during streaming upload.
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_link.md b/docs/content/commands/rclone_link.md index c51e3758d..08c4497d5 100644 --- a/docs/content/commands/rclone_link.md +++ b/docs/content/commands/rclone_link.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone link" slug: rclone_link url: /commands/rclone_link/ @@ -35,285 +35,303 @@ rclone link remote:path [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). 
(default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). 
(default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. (default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. 
(default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. - --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. 
(default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
- --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. 
(default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. 
- --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. 
- --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. 
- --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. - --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. 
(default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. (default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. 
Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. 
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export., + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. 
+      --drive-service-account-credentials string  Service Account Credentials JSON blob
+      --drive-service-account-file string  Service Account Credentials JSON file path
+      --drive-shared-with-me  Only show files that are shared with me.
+      --drive-skip-gdocs  Skip google documents in all listings.
+      --drive-team-drive string  ID of the Team Drive
+      --drive-trashed-only  Only show files that are in the trash.
+      --drive-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (default 8M)
+      --drive-use-created-date  Use file created date instead of modified date.,
+      --drive-use-trash  Send files to the trash instead of deleting permanently. (default true)
+      --drive-v2-download-min-size SizeSuffix  If Object's are greater, use drive v2 API to download. (default off)
+      --dropbox-chunk-size SizeSuffix  Upload chunk size. (< 150M). (default 48M)
+      --dropbox-client-id string  Dropbox App Client Id
+      --dropbox-client-secret string  Dropbox App Client Secret
+      --dropbox-impersonate string  Impersonate this user when using a business account.
+  -n, --dry-run  Do a trial run with no permanent changes
+      --dump DumpFlags  List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+      --dump-bodies  Dump HTTP headers and bodies - may contain sensitive info
+      --dump-headers  Dump HTTP bodies - may contain sensitive info
+      --exclude stringArray  Exclude files matching pattern
+      --exclude-from stringArray  Read exclude patterns from file
+      --exclude-if-present string  Exclude directories if filename is present
+      --fast-list  Use recursive list if available. Uses more memory but fewer transactions.
+      --files-from stringArray  Read list of source-file names from file
+  -f, --filter stringArray  Add a file-filtering rule
+      --filter-from stringArray  Read filtering patterns from a file
+      --ftp-host string  FTP host to connect to
+      --ftp-pass string  FTP password
+      --ftp-port string  FTP port, leave blank to use default (21)
+      --ftp-user string  FTP username, leave blank for current username, $USER
+      --gcs-bucket-acl string  Access Control List for new buckets.
+      --gcs-client-id string  Google Application Client Id
+      --gcs-client-secret string  Google Application Client Secret
+      --gcs-location string  Location for the newly created buckets.
+      --gcs-object-acl string  Access Control List for new objects.
+      --gcs-project-number string  Project number.
+      --gcs-service-account-file string  Service Account Credentials JSON file path
+      --gcs-storage-class string  The storage class to use when storing objects in Google Cloud Storage.
+      --http-url string  URL of http host to connect to
+      --hubic-chunk-size SizeSuffix  Above this size files will be chunked into a _segments container. (default 5G)
+      --hubic-client-id string  Hubic Client Id
+      --hubic-client-secret string  Hubic Client Secret
+      --hubic-no-chunk  Don't chunk files during streaming upload.
+      --ignore-case  Ignore case in filters (case insensitive)
+      --ignore-checksum  Skip post copy check of checksums.
+      --ignore-errors  delete even if there are I/O errors
+      --ignore-existing  Skip all files that exist on destination
+      --ignore-size  Ignore size when skipping use mod-time or checksum.
+  -I, --ignore-times  Don't skip files that match size and time - transfer all files
+      --immutable  Do not modify files. Fail if existing files have been modified.
+      --include stringArray  Include files matching pattern
+      --include-from stringArray  Read include patterns from file
+      --jottacloud-hard-delete  Delete files permanently rather than putting them into the trash.
+      --jottacloud-md5-memory-limit SizeSuffix  Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+      --jottacloud-mountpoint string  The mountpoint to use.
+      --jottacloud-unlink  Remove existing public link to file/folder with link command rather than creating.
+      --jottacloud-upload-resume-limit SizeSuffix  Files bigger than this can be resumed if the upload fail's. (default 10M)
+      --jottacloud-user string  User Name:
+  -l, --links  Translate symlinks to/from regular files with a '.rclonelink' extension
+      --local-no-check-updated  Don't check to see if the files change during upload
+      --local-no-unicode-normalization  Don't apply unicode normalization to paths and filenames (Deprecated)
+      --local-nounc string  Disable UNC (long path names) conversion on Windows
+      --log-file string  Log everything to this file
+      --log-format string  Comma separated list of log format options (default "date,time")
+      --log-level string  Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+      --low-level-retries int  Number of low level retries to do. (default 10)
+      --max-age Duration  Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --max-backlog int  Maximum number of objects in sync or check backlog. (default 10000)
+      --max-delete int  When synchronizing, limit the number of deletes (default -1)
+      --max-depth int  If set limits the recursion depth to this. (default -1)
+      --max-size SizeSuffix  Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+      --max-transfer SizeSuffix  Maximum size of data to transfer. (default off)
+      --mega-debug  Output more debug from Mega.
+      --mega-hard-delete  Delete files permanently rather than putting them into the trash.
+      --mega-pass string  Password.
+      --mega-user string  User name
+      --memprofile string  Write memory profile to file
+      --min-age Duration  Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --min-size SizeSuffix  Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+      --modify-window duration  Max time diff to be considered the same (default 1ns)
+      --no-check-certificate  Do not verify the server SSL certificate. Insecure.
+      --no-gzip-encoding  Don't set Accept-Encoding: gzip.
+      --no-traverse  Don't traverse destination file system on copy.
+      --no-update-modtime  Don't update destination mod-time if files identical.
+  -x, --one-file-system  Don't cross filesystem boundaries (unix/macOS only).
+      --onedrive-chunk-size SizeSuffix  Chunk size to upload files with - must be multiple of 320k. (default 10M)
+      --onedrive-client-id string  Microsoft App Client Id
+      --onedrive-client-secret string  Microsoft App Client Secret
+      --onedrive-drive-id string  The ID of the drive to use
+      --onedrive-drive-type string  The type of the drive ( personal | business | documentLibrary )
+      --onedrive-expose-onenote-files  Set to make OneNote files show up in directory listings.
+      --opendrive-password string  Password.
+      --opendrive-username string  Username
+      --pcloud-client-id string  Pcloud App Client Id
+      --pcloud-client-secret string  Pcloud App Client Secret
+  -P, --progress  Show progress during transfer.
+      --qingstor-access-key-id string  QingStor Access Key ID
+      --qingstor-chunk-size SizeSuffix  Chunk size to use for uploading. (default 4M)
+      --qingstor-connection-retries int  Number of connection retries. (default 3)
+      --qingstor-endpoint string  Enter a endpoint URL to connection QingStor API.
+      --qingstor-env-auth  Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
+      --qingstor-secret-access-key string  QingStor Secret Access Key (password)
+      --qingstor-upload-concurrency int  Concurrency for multipart uploads. (default 1)
+      --qingstor-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (default 200M)
+      --qingstor-zone string  Zone to connect to.
+  -q, --quiet  Print as little stuff as possible
+      --rc  Enable the remote control server.
+      --rc-addr string  IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+      --rc-cert string  SSL PEM key (concatenation of certificate and CA certificate)
+      --rc-client-ca string  Client certificate authority to verify clients with
+      --rc-files string  Path to local files to serve on the HTTP server.
+      --rc-htpasswd string  htpasswd file - if not provided no authentication is done
+      --rc-key string  SSL PEM Private key
+      --rc-max-header-bytes int  Maximum size of request header (default 4096)
+      --rc-no-auth  Don't require auth for certain methods.
+      --rc-pass string  Password for authentication.
+      --rc-realm string  realm for authentication (default "rclone")
+      --rc-serve  Enable the serving of remote objects.
+      --rc-server-read-timeout duration  Timeout for server reading data (default 1h0m0s)
+      --rc-server-write-timeout duration  Timeout for server writing data (default 1h0m0s)
+      --rc-user string  User name for authentication.
+      --retries int  Retry operations this many times if they fail (default 3)
+      --retries-sleep duration  Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+      --s3-access-key-id string  AWS Access Key ID.
+      --s3-acl string  Canned ACL used when creating buckets and storing or copying objects.
+      --s3-bucket-acl string  Canned ACL used when creating buckets.
+      --s3-chunk-size SizeSuffix  Chunk size to use for uploading. (default 5M)
+      --s3-disable-checksum  Don't store MD5 checksum with object metadata
+      --s3-endpoint string  Endpoint for S3 API.
+      --s3-env-auth  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+      --s3-force-path-style  If true use path style access if false use virtual hosted style. (default true)
+      --s3-location-constraint string  Location constraint - must be set to match the Region.
+      --s3-provider string  Choose your S3 provider.
+      --s3-region string  Region to connect to.
+      --s3-secret-access-key string  AWS Secret Access Key (password)
+      --s3-server-side-encryption string  The server-side encryption algorithm used when storing this object in S3.
+      --s3-session-token string  An AWS session token
+      --s3-sse-kms-key-id string  If using KMS ID you must provide the ARN of Key.
+      --s3-storage-class string  The storage class to use when storing new objects in S3.
+      --s3-upload-concurrency int  Concurrency for multipart uploads. (default 4)
+      --s3-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (default 200M)
+      --s3-v2-auth  If true use v2 authentication.
+      --sftp-ask-password  Allow asking for SFTP password when needed.
+      --sftp-disable-hashcheck  Disable the execution of SSH commands to determine if remote file hashing is available.
+      --sftp-host string  SSH host to connect to
+      --sftp-key-file string  Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
+      --sftp-key-file-pass string  The passphrase to decrypt the PEM-encoded private key file.
+      --sftp-key-use-agent  When set forces the usage of the ssh-agent.
+      --sftp-pass string  SSH password, leave blank to use ssh-agent.
+      --sftp-path-override string  Override path used by SSH connection.
+      --sftp-port string  SSH port, leave blank to use default (22)
+      --sftp-set-modtime  Set the modified time on the remote if set. (default true)
+      --sftp-use-insecure-cipher  Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+      --sftp-user string  SSH username, leave blank for current username, ncw
+      --size-only  Skip based on size only, not mod-time or checksum
+      --skip-links  Don't warn about skipped symlinks.
+      --stats duration  Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+      --stats-file-name-length int  Max file name length in stats. 0 for no limit (default 45)
+      --stats-log-level string  Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+      --stats-one-line  Make the stats fit on one line.
+      --stats-unit string  Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+      --streaming-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+      --suffix string  Suffix for use with --backup-dir.
+      --swift-application-credential-id string  Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
+      --swift-application-credential-name string  Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
+      --swift-application-credential-secret string  Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
+      --swift-auth string  Authentication URL for server (OS_AUTH_URL).
+      --swift-auth-token string  Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+      --swift-auth-version int  AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+      --swift-chunk-size SizeSuffix  Above this size files will be chunked into a _segments container. (default 5G)
+      --swift-domain string  User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+      --swift-endpoint-type string  Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+      --swift-env-auth  Get swift credentials from environment variables in standard OpenStack form.
+      --swift-key string  API key or password (OS_PASSWORD).
+      --swift-no-chunk  Don't chunk files during streaming upload.
+      --swift-region string  Region name - optional (OS_REGION_NAME)
+      --swift-storage-policy string  The storage policy to use when creating a new container
+      --swift-storage-url string  Storage URL - optional (OS_STORAGE_URL)
+      --swift-tenant string  Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+      --swift-tenant-domain string  Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+      --swift-tenant-id string  Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+      --swift-user string  User name to log in (OS_USERNAME).
+      --swift-user-id string  User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+      --syslog  Use Syslog for logging
+      --syslog-facility string  Facility for syslog, eg KERN,USER,... (default "DAEMON")
+      --timeout duration  IO idle timeout (default 5m0s)
+      --tpslimit float  Limit HTTP transactions per second to this.
+      --tpslimit-burst int  Max burst of transactions for --tpslimit. (default 1)
+      --track-renames  When synchronizing, track file renames and do a server side move if possible
+      --transfers int  Number of file transfers to run in parallel. (default 4)
+      --union-remotes string  List of space separated remotes.
+  -u, --update  Skip files that are newer on the destination.
+      --use-cookies  Enable session cookiejar.
+      --use-mmap  Use mmap allocator (see docs).
+      --use-server-modtime  Use server modified time instead of object metadata
+      --user-agent string  Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
+  -v, --verbose count  Print lots more stuff (repeat for more)
+      --webdav-bearer-token string  Bearer token instead of user/pass (eg a Macaroon)
+      --webdav-pass string  Password.
+      --webdav-url string  URL of http host to connect to
+      --webdav-user string  User name
+      --webdav-vendor string  Name of the Webdav site/service/software you are using
+      --yandex-client-id string  Yandex Client Id
+      --yandex-client-secret string  Yandex Client Secret
+      --yandex-unlink  Remove existing public link to file/folder with link command rather than creating.
 ```
 
 ### SEE ALSO
 
 * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
 
-###### Auto generated by spf13/cobra on 24-Nov-2018
+###### Auto generated by spf13/cobra on 9-Feb-2019
diff --git a/docs/content/commands/rclone_listremotes.md b/docs/content/commands/rclone_listremotes.md
index 753c332d2..5e0312825 100644
--- a/docs/content/commands/rclone_listremotes.md
+++ b/docs/content/commands/rclone_listremotes.md
@@ -1,5 +1,5 @@
 ---
-date: 2018-11-24T13:43:29Z
+date: 2019-02-09T10:42:18Z
 title: "rclone listremotes"
 slug: rclone_listremotes
 url: /commands/rclone_listremotes/
@@ -24,291 +24,309 @@ rclone listremotes [flags]
 ```
   -h, --help  help for listremotes
-  -l, --long  Show the type as well as names.
+      --long  Show the type as well as names.
 ```
 
 ### Options inherited from parent commands
 
 ```
-      --acd-auth-url string  Auth server URL.
-      --acd-client-id string  Amazon Application Client ID.
-      --acd-client-secret string  Amazon Application Client Secret.
-      --acd-templink-threshold SizeSuffix  Files >= this size will be downloaded via their tempLink. (default 9G)
-      --acd-token-url string  Token server url.
-      --acd-upload-wait-per-gb Duration  Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
-      --alias-remote string  Remote or path to alias.
-      --ask-password  Allow prompt for password for encrypted configuration. (default true)
-      --auto-confirm  If enabled, do not request console confirmation.
-      --azureblob-access-tier string  Access tier of blob: hot, cool or archive.
-      --azureblob-account string  Storage Account Name (leave blank to use connection string or SAS URL)
-      --azureblob-chunk-size SizeSuffix  Upload chunk size (<= 100MB). (default 4M)
-      --azureblob-endpoint string  Endpoint for the service
-      --azureblob-key string  Storage Account Key (leave blank to use connection string or SAS URL)
-      --azureblob-list-chunk int  Size of blob list. (default 5000)
-      --azureblob-sas-url string  SAS URL for container level access only
-      --azureblob-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (<= 256MB). (default 256M)
-      --b2-account string  Account ID or Application Key ID
-      --b2-chunk-size SizeSuffix  Upload chunk size. Must fit in memory. (default 96M)
-      --b2-endpoint string  Endpoint for the service.
-      --b2-hard-delete  Permanently delete files on remote removal, otherwise hide files.
-      --b2-key string  Application Key
-      --b2-test-mode string  A flag string for X-Bz-Test-Mode header for debugging.
-      --b2-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload. (default 200M)
-      --b2-versions  Include old versions in directory listings.
-      --backup-dir string  Make backups into hierarchy based in DIR.
-      --bind string  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
-      --box-client-id string  Box App Client Id.
-      --box-client-secret string  Box App Client Secret
-      --box-commit-retries int  Max number of times to try committing a multipart file. (default 100)
-      --box-upload-cutoff SizeSuffix  Cutoff for switching to multipart upload (>= 50MB). (default 50M)
-      --buffer-size int  In memory buffer size when reading files for each --transfer. (default 16M)
-      --bwlimit BwTimetable  Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
-      --cache-chunk-clean-interval Duration  How often should the cache perform cleanups of the chunk storage. (default 1m0s)
-      --cache-chunk-no-memory  Disable the in-memory cache for storing chunks during streaming.
-      --cache-chunk-path string  Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
-      --cache-chunk-size SizeSuffix  The size of a chunk (partial file data). (default 5M)
-      --cache-chunk-total-size SizeSuffix  The total size that the chunks can take up on the local disk. (default 10G)
-      --cache-db-path string  Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
-      --cache-db-purge  Clear all the cached data for this remote on start.
-      --cache-db-wait-time Duration  How long to wait for the DB to be available - 0 is unlimited (default 1s)
-      --cache-dir string  Directory rclone will use for caching. (default "$HOME/.cache/rclone")
-      --cache-info-age Duration  How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
-      --cache-plex-insecure string  Skip all certificate verifications when connecting to the Plex server
-      --cache-plex-password string  The password of the Plex user
-      --cache-plex-url string  The URL of the Plex server
-      --cache-plex-username string  The username of the Plex user
-      --cache-read-retries int  How many times to retry a read from a cache storage. (default 10)
-      --cache-remote string  Remote to cache.
-      --cache-rps int  Limits the number of requests per second to the source FS (-1 to disable) (default -1)
-      --cache-tmp-upload-path string  Directory to keep temporary files until they are uploaded.
-      --cache-tmp-wait-time Duration  How long should files be stored in local cache before being uploaded (default 15s)
-      --cache-workers int  How many workers should run in parallel to download chunks. (default 4)
-      --cache-writes  Cache file data on writes through the FS
-      --checkers int  Number of checkers to run in parallel. (default 8)
-  -c, --checksum  Skip based on checksum & size, not mod-time & size
-      --config string  Config file. (default "/home/ncw/.rclone.conf")
-      --contimeout duration  Connect timeout (default 1m0s)
-  -L, --copy-links  Follow symlinks and copy the pointed to item.
-      --cpuprofile string  Write cpu profile to file
-      --crypt-directory-name-encryption  Option to either encrypt directory names or leave them intact. (default true)
-      --crypt-filename-encryption string  How to encrypt the filenames. (default "standard")
-      --crypt-password string  Password or pass phrase for encryption.
-      --crypt-password2 string  Password or pass phrase for salt. Optional but recommended.
-      --crypt-remote string  Remote to encrypt/decrypt.
-      --crypt-show-mapping  For all files listed show how the names encrypt.
-      --delete-after  When synchronizing, delete files on destination after transferring (default)
-      --delete-before  When synchronizing, delete files on destination before transferring
-      --delete-during  When synchronizing, delete files during transfer
-      --delete-excluded  Delete files on dest excluded from sync
-      --disable string  Disable a comma separated list of features. Use help to see a list.
-      --drive-acknowledge-abuse  Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
-      --drive-allow-import-name-change  Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
-      --drive-alternate-export  Use alternate export URLs for google documents export.,
-      --drive-auth-owner-only  Only consider files owned by the authenticated user.
-      --drive-chunk-size SizeSuffix  Upload chunk size. Must a power of 2 >= 256k. (default 8M)
-      --drive-client-id string  Google Application Client Id
-      --drive-client-secret string  Google Application Client Secret
-      --drive-export-formats string  Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
-      --drive-formats string  Deprecated: see export_formats
-      --drive-impersonate string  Impersonate this user when using a service account.
-      --drive-import-formats string  Comma separated list of preferred formats for uploading Google docs.
-      --drive-keep-revision-forever  Keep new head revision of each file forever.
-      --drive-list-chunk int  Size of listing chunk 100-1000. 0 to disable. (default 1000)
-      --drive-root-folder-id string  ID of the root folder
-      --drive-scope string  Scope that rclone should use when requesting access from drive.
-      --drive-service-account-credentials string  Service Account Credentials JSON blob
-      --drive-service-account-file string  Service Account Credentials JSON file path
-      --drive-shared-with-me  Only show files that are shared with me.
-      --drive-skip-gdocs  Skip google documents in all listings.
-      --drive-team-drive string  ID of the Team Drive
-      --drive-trashed-only  Only show files that are in the trash.
-      --drive-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (default 8M)
-      --drive-use-created-date  Use file created date instead of modified date.,
-      --drive-use-trash  Send files to the trash instead of deleting permanently. (default true)
-      --drive-v2-download-min-size SizeSuffix  If Object's are greater, use drive v2 API to download. (default off)
-      --dropbox-chunk-size SizeSuffix  Upload chunk size. (< 150M). (default 48M)
-      --dropbox-client-id string  Dropbox App Client Id
-      --dropbox-client-secret string  Dropbox App Client Secret
-      --dropbox-impersonate string  Impersonate this user when using a business account.
-  -n, --dry-run  Do a trial run with no permanent changes
-      --dump string  List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
-      --dump-bodies  Dump HTTP headers and bodies - may contain sensitive info
-      --dump-headers  Dump HTTP bodies - may contain sensitive info
-      --exclude stringArray  Exclude files matching pattern
-      --exclude-from stringArray  Read exclude patterns from file
-      --exclude-if-present string  Exclude directories if filename is present
-      --fast-list  Use recursive list if available. Uses more memory but fewer transactions.
-      --files-from stringArray  Read list of source-file names from file
-  -f, --filter stringArray  Add a file-filtering rule
-      --filter-from stringArray  Read filtering patterns from a file
-      --ftp-host string  FTP host to connect to
-      --ftp-pass string  FTP password
-      --ftp-port string  FTP port, leave blank to use default (21)
-      --ftp-user string  FTP username, leave blank for current username, $USER
-      --gcs-bucket-acl string  Access Control List for new buckets.
-      --gcs-client-id string  Google Application Client Id
-      --gcs-client-secret string  Google Application Client Secret
-      --gcs-location string  Location for the newly created buckets.
-      --gcs-object-acl string  Access Control List for new objects.
-      --gcs-project-number string  Project number.
-      --gcs-service-account-file string  Service Account Credentials JSON file path
-      --gcs-storage-class string  The storage class to use when storing objects in Google Cloud Storage.
-      --http-url string  URL of http host to connect to
-      --hubic-chunk-size SizeSuffix  Above this size files will be chunked into a _segments container. (default 5G)
-      --hubic-client-id string  Hubic Client Id
-      --hubic-client-secret string  Hubic Client Secret
-      --ignore-case  Ignore case in filters (case insensitive)
-      --ignore-checksum  Skip post copy check of checksums.
-      --ignore-errors  delete even if there are I/O errors
-      --ignore-existing  Skip all files that exist on destination
-      --ignore-size  Ignore size when skipping use mod-time or checksum.
-  -I, --ignore-times  Don't skip files that match size and time - transfer all files
-      --immutable  Do not modify files. Fail if existing files have been modified.
-      --include stringArray  Include files matching pattern
-      --include-from stringArray  Read include patterns from file
-      --jottacloud-hard-delete  Delete files permanently rather than putting them into the trash.
-      --jottacloud-md5-memory-limit SizeSuffix  Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
-      --jottacloud-mountpoint string  The mountpoint to use.
-      --jottacloud-pass string  Password.
-      --jottacloud-unlink  Remove existing public link to file/folder with link command rather than creating.
-      --jottacloud-user string  User Name
-      --local-no-check-updated  Don't check to see if the files change during upload
-      --local-no-unicode-normalization  Don't apply unicode normalization to paths and filenames (Deprecated)
-      --local-nounc string  Disable UNC (long path names) conversion on Windows
-      --log-file string  Log everything to this file
-      --log-format string  Comma separated list of log format options (default "date,time")
-      --log-level string  Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
-      --low-level-retries int  Number of low level retries to do. (default 10)
-      --max-age duration  Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-      --max-backlog int  Maximum number of objects in sync or check backlog. (default 10000)
-      --max-delete int  When synchronizing, limit the number of deletes (default -1)
-      --max-depth int  If set limits the recursion depth to this. (default -1)
-      --max-size int  Only transfer files smaller than this in k or suffix b|k|M|G (default off)
-      --max-transfer int  Maximum size of data to transfer. (default off)
-      --mega-debug  Output more debug from Mega.
-      --mega-hard-delete  Delete files permanently rather than putting them into the trash.
-      --mega-pass string  Password.
-      --mega-user string  User name
-      --memprofile string  Write memory profile to file
-      --min-age duration  Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-      --min-size int  Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-      --modify-window duration  Max time diff to be considered the same (default 1ns)
-      --no-check-certificate  Do not verify the server SSL certificate. Insecure.
-      --no-gzip-encoding  Don't set Accept-Encoding: gzip.
-      --no-traverse  Obsolete - does nothing.
-      --no-update-modtime  Don't update destination mod-time if files identical.
-  -x, --one-file-system  Don't cross filesystem boundaries (unix/macOS only).
-      --onedrive-chunk-size SizeSuffix  Chunk size to upload files with - must be multiple of 320k. (default 10M)
-      --onedrive-client-id string  Microsoft App Client Id
-      --onedrive-client-secret string  Microsoft App Client Secret
-      --onedrive-drive-id string  The ID of the drive to use
-      --onedrive-drive-type string  The type of the drive ( personal | business | documentLibrary )
-      --onedrive-expose-onenote-files  Set to make OneNote files show up in directory listings.
-      --opendrive-password string  Password.
-      --opendrive-username string  Username
-      --pcloud-client-id string  Pcloud App Client Id
-      --pcloud-client-secret string  Pcloud App Client Secret
-  -P, --progress  Show progress during transfer.
-      --qingstor-access-key-id string  QingStor Access Key ID
-      --qingstor-connection-retries int  Number of connection retries. (default 3)
-      --qingstor-endpoint string  Enter a endpoint URL to connection QingStor API.
-      --qingstor-env-auth  Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
-      --qingstor-secret-access-key string  QingStor Secret Access Key (password)
-      --qingstor-zone string  Zone to connect to.
-  -q, --quiet  Print as little stuff as possible
-      --rc  Enable the remote control server.
-      --rc-addr string  IPaddress:Port or :Port to bind server to. (default "localhost:5572")
-      --rc-cert string  SSL PEM key (concatenation of certificate and CA certificate)
-      --rc-client-ca string  Client certificate authority to verify clients with
-      --rc-files string  Path to local files to serve on the HTTP server.
-      --rc-htpasswd string  htpasswd file - if not provided no authentication is done
-      --rc-key string  SSL PEM Private key
-      --rc-max-header-bytes int  Maximum size of request header (default 4096)
-      --rc-no-auth  Don't require auth for certain methods.
-      --rc-pass string  Password for authentication.
-      --rc-realm string  realm for authentication (default "rclone")
-      --rc-serve  Enable the serving of remote objects.
-      --rc-server-read-timeout duration  Timeout for server reading data (default 1h0m0s)
-      --rc-server-write-timeout duration  Timeout for server writing data (default 1h0m0s)
-      --rc-user string  User name for authentication.
-      --retries int  Retry operations this many times if they fail (default 3)
-      --retries-sleep duration  Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
-      --s3-access-key-id string  AWS Access Key ID.
-      --s3-acl string  Canned ACL used when creating buckets and storing or copying objects.
-      --s3-chunk-size SizeSuffix  Chunk size to use for uploading. (default 5M)
-      --s3-disable-checksum  Don't store MD5 checksum with object metadata
-      --s3-endpoint string  Endpoint for S3 API.
-      --s3-env-auth  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
-      --s3-force-path-style  If true use path style access if false use virtual hosted style. (default true)
-      --s3-location-constraint string  Location constraint - must be set to match the Region.
-      --s3-provider string  Choose your S3 provider.
-      --s3-region string  Region to connect to.
-      --s3-secret-access-key string  AWS Secret Access Key (password)
-      --s3-server-side-encryption string  The server-side encryption algorithm used when storing this object in S3.
-      --s3-session-token string  An AWS session token
-      --s3-sse-kms-key-id string  If using KMS ID you must provide the ARN of Key.
-      --s3-storage-class string  The storage class to use when storing new objects in S3.
-      --s3-upload-concurrency int  Concurrency for multipart uploads. (default 2)
-      --s3-v2-auth  If true use v2 authentication.
-      --sftp-ask-password  Allow asking for SFTP password when needed.
-      --sftp-disable-hashcheck  Disable the execution of SSH commands to determine if remote file hashing is available.
-      --sftp-host string  SSH host to connect to
-      --sftp-key-file string  Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
-      --sftp-pass string  SSH password, leave blank to use ssh-agent.
-      --sftp-path-override string  Override path used by SSH connection.
-      --sftp-port string  SSH port, leave blank to use default (22)
-      --sftp-set-modtime  Set the modified time on the remote if set. (default true)
-      --sftp-use-insecure-cipher  Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
-      --sftp-user string  SSH username, leave blank for current username, ncw
-      --size-only  Skip based on size only, not mod-time or checksum
-      --skip-links  Don't warn about skipped symlinks.
-      --stats duration  Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
-      --stats-file-name-length int  Max file name length in stats. 0 for no limit (default 40)
-      --stats-log-level string  Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
-      --stats-one-line  Make the stats fit on one line.
-      --stats-unit string  Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
-      --streaming-upload-cutoff int  Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
-      --suffix string  Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. 
(default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. 
(default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. 
(default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. + --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. 
(default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. + --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export., + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. 
(default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. + --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date., + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP bodies - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M) + --jottacloud-user string User Name: + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_ls.md b/docs/content/commands/rclone_ls.md index 07e4e1291..0275ebc4b 100644 --- a/docs/content/commands/rclone_ls.md +++ b/docs/content/commands/rclone_ls.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone ls" slug: rclone_ls url: /commands/rclone_ls/ @@ -59,285 +59,303 @@ rclone ls remote:path [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). 
(default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). 
(default 5M)
-      --cache-chunk-total-size SizeSuffix   The total size that the chunks can take up on the local disk. (default 10G)
-      --cache-db-path string   Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
-      --cache-db-purge   Clear all the cached data for this remote on start.
-      --cache-db-wait-time Duration   How long to wait for the DB to be available - 0 is unlimited (default 1s)
-      --cache-dir string   Directory rclone will use for caching. (default "$HOME/.cache/rclone")
-      --cache-info-age Duration   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
-      --cache-plex-insecure string   Skip all certificate verifications when connecting to the Plex server
-      --cache-plex-password string   The password of the Plex user
-      --cache-plex-url string   The URL of the Plex server
-      --cache-plex-username string   The username of the Plex user
-      --cache-read-retries int   How many times to retry a read from a cache storage. (default 10)
-      --cache-remote string   Remote to cache.
-      --cache-rps int   Limits the number of requests per second to the source FS (-1 to disable) (default -1)
-      --cache-tmp-upload-path string   Directory to keep temporary files until they are uploaded.
-      --cache-tmp-wait-time Duration   How long should files be stored in local cache before being uploaded (default 15s)
-      --cache-workers int   How many workers should run in parallel to download chunks. (default 4)
-      --cache-writes   Cache file data on writes through the FS
-      --checkers int   Number of checkers to run in parallel. (default 8)
-  -c, --checksum   Skip based on checksum & size, not mod-time & size
-      --config string   Config file. (default "/home/ncw/.rclone.conf")
-      --contimeout duration   Connect timeout (default 1m0s)
-  -L, --copy-links   Follow symlinks and copy the pointed to item.
-      --cpuprofile string   Write cpu profile to file
-      --crypt-directory-name-encryption   Option to either encrypt directory names or leave them intact. (default true)
-      --crypt-filename-encryption string   How to encrypt the filenames. (default "standard")
-      --crypt-password string   Password or pass phrase for encryption.
-      --crypt-password2 string   Password or pass phrase for salt. Optional but recommended.
-      --crypt-remote string   Remote to encrypt/decrypt.
-      --crypt-show-mapping   For all files listed show how the names encrypt.
-      --delete-after   When synchronizing, delete files on destination after transferring (default)
-      --delete-before   When synchronizing, delete files on destination before transferring
-      --delete-during   When synchronizing, delete files during transfer
-      --delete-excluded   Delete files on dest excluded from sync
-      --disable string   Disable a comma separated list of features. Use help to see a list.
-      --drive-acknowledge-abuse   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
-      --drive-allow-import-name-change   Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
-      --drive-alternate-export   Use alternate export URLs for google documents export.,
-      --drive-auth-owner-only   Only consider files owned by the authenticated user.
-      --drive-chunk-size SizeSuffix   Upload chunk size. Must a power of 2 >= 256k. (default 8M)
-      --drive-client-id string   Google Application Client Id
-      --drive-client-secret string   Google Application Client Secret
-      --drive-export-formats string   Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
-      --drive-formats string   Deprecated: see export_formats
-      --drive-impersonate string   Impersonate this user when using a service account.
-      --drive-import-formats string   Comma separated list of preferred formats for uploading Google docs.
-      --drive-keep-revision-forever   Keep new head revision of each file forever.
-      --drive-list-chunk int   Size of listing chunk 100-1000. 0 to disable. (default 1000)
-      --drive-root-folder-id string   ID of the root folder
-      --drive-scope string   Scope that rclone should use when requesting access from drive.
-      --drive-service-account-credentials string   Service Account Credentials JSON blob
-      --drive-service-account-file string   Service Account Credentials JSON file path
-      --drive-shared-with-me   Only show files that are shared with me.
-      --drive-skip-gdocs   Skip google documents in all listings.
-      --drive-team-drive string   ID of the Team Drive
-      --drive-trashed-only   Only show files that are in the trash.
-      --drive-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 8M)
-      --drive-use-created-date   Use file created date instead of modified date.,
-      --drive-use-trash   Send files to the trash instead of deleting permanently. (default true)
-      --drive-v2-download-min-size SizeSuffix   If Object's are greater, use drive v2 API to download. (default off)
-      --dropbox-chunk-size SizeSuffix   Upload chunk size. (< 150M). (default 48M)
-      --dropbox-client-id string   Dropbox App Client Id
-      --dropbox-client-secret string   Dropbox App Client Secret
-      --dropbox-impersonate string   Impersonate this user when using a business account.
-  -n, --dry-run   Do a trial run with no permanent changes
-      --dump string   List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
-      --dump-bodies   Dump HTTP headers and bodies - may contain sensitive info
-      --dump-headers   Dump HTTP bodies - may contain sensitive info
-      --exclude stringArray   Exclude files matching pattern
-      --exclude-from stringArray   Read exclude patterns from file
-      --exclude-if-present string   Exclude directories if filename is present
-      --fast-list   Use recursive list if available. Uses more memory but fewer transactions.
-      --files-from stringArray   Read list of source-file names from file
-  -f, --filter stringArray   Add a file-filtering rule
-      --filter-from stringArray   Read filtering patterns from a file
-      --ftp-host string   FTP host to connect to
-      --ftp-pass string   FTP password
-      --ftp-port string   FTP port, leave blank to use default (21)
-      --ftp-user string   FTP username, leave blank for current username, $USER
-      --gcs-bucket-acl string   Access Control List for new buckets.
-      --gcs-client-id string   Google Application Client Id
-      --gcs-client-secret string   Google Application Client Secret
-      --gcs-location string   Location for the newly created buckets.
-      --gcs-object-acl string   Access Control List for new objects.
-      --gcs-project-number string   Project number.
-      --gcs-service-account-file string   Service Account Credentials JSON file path
-      --gcs-storage-class string   The storage class to use when storing objects in Google Cloud Storage.
-      --http-url string   URL of http host to connect to
-      --hubic-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
-      --hubic-client-id string   Hubic Client Id
-      --hubic-client-secret string   Hubic Client Secret
-      --ignore-case   Ignore case in filters (case insensitive)
-      --ignore-checksum   Skip post copy check of checksums.
-      --ignore-errors   delete even if there are I/O errors
-      --ignore-existing   Skip all files that exist on destination
-      --ignore-size   Ignore size when skipping use mod-time or checksum.
-  -I, --ignore-times   Don't skip files that match size and time - transfer all files
-      --immutable   Do not modify files. Fail if existing files have been modified.
-      --include stringArray   Include files matching pattern
-      --include-from stringArray   Read include patterns from file
-      --jottacloud-hard-delete   Delete files permanently rather than putting them into the trash.
-      --jottacloud-md5-memory-limit SizeSuffix   Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
-      --jottacloud-mountpoint string   The mountpoint to use.
-      --jottacloud-pass string   Password.
-      --jottacloud-unlink   Remove existing public link to file/folder with link command rather than creating.
-      --jottacloud-user string   User Name
-      --local-no-check-updated   Don't check to see if the files change during upload
-      --local-no-unicode-normalization   Don't apply unicode normalization to paths and filenames (Deprecated)
-      --local-nounc string   Disable UNC (long path names) conversion on Windows
-      --log-file string   Log everything to this file
-      --log-format string   Comma separated list of log format options (default "date,time")
-      --log-level string   Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
-      --low-level-retries int   Number of low level retries to do. (default 10)
-      --max-age duration   Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-      --max-backlog int   Maximum number of objects in sync or check backlog. (default 10000)
-      --max-delete int   When synchronizing, limit the number of deletes (default -1)
-      --max-depth int   If set limits the recursion depth to this. (default -1)
-      --max-size int   Only transfer files smaller than this in k or suffix b|k|M|G (default off)
-      --max-transfer int   Maximum size of data to transfer. (default off)
-      --mega-debug   Output more debug from Mega.
-      --mega-hard-delete   Delete files permanently rather than putting them into the trash.
-      --mega-pass string   Password.
-      --mega-user string   User name
-      --memprofile string   Write memory profile to file
-      --min-age duration   Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-      --min-size int   Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-      --modify-window duration   Max time diff to be considered the same (default 1ns)
-      --no-check-certificate   Do not verify the server SSL certificate. Insecure.
-      --no-gzip-encoding   Don't set Accept-Encoding: gzip.
-      --no-traverse   Obsolete - does nothing.
-      --no-update-modtime   Don't update destination mod-time if files identical.
-  -x, --one-file-system   Don't cross filesystem boundaries (unix/macOS only).
-      --onedrive-chunk-size SizeSuffix   Chunk size to upload files with - must be multiple of 320k. (default 10M)
-      --onedrive-client-id string   Microsoft App Client Id
-      --onedrive-client-secret string   Microsoft App Client Secret
-      --onedrive-drive-id string   The ID of the drive to use
-      --onedrive-drive-type string   The type of the drive ( personal | business | documentLibrary )
-      --onedrive-expose-onenote-files   Set to make OneNote files show up in directory listings.
-      --opendrive-password string   Password.
-      --opendrive-username string   Username
-      --pcloud-client-id string   Pcloud App Client Id
-      --pcloud-client-secret string   Pcloud App Client Secret
-  -P, --progress   Show progress during transfer.
-      --qingstor-access-key-id string   QingStor Access Key ID
-      --qingstor-connection-retries int   Number of connection retries. (default 3)
-      --qingstor-endpoint string   Enter a endpoint URL to connection QingStor API.
-      --qingstor-env-auth   Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
-      --qingstor-secret-access-key string   QingStor Secret Access Key (password)
-      --qingstor-zone string   Zone to connect to.
-  -q, --quiet   Print as little stuff as possible
-      --rc   Enable the remote control server.
-      --rc-addr string   IPaddress:Port or :Port to bind server to. (default "localhost:5572")
-      --rc-cert string   SSL PEM key (concatenation of certificate and CA certificate)
-      --rc-client-ca string   Client certificate authority to verify clients with
-      --rc-files string   Path to local files to serve on the HTTP server.
-      --rc-htpasswd string   htpasswd file - if not provided no authentication is done
-      --rc-key string   SSL PEM Private key
-      --rc-max-header-bytes int   Maximum size of request header (default 4096)
-      --rc-no-auth   Don't require auth for certain methods.
-      --rc-pass string   Password for authentication.
-      --rc-realm string   realm for authentication (default "rclone")
-      --rc-serve   Enable the serving of remote objects.
-      --rc-server-read-timeout duration   Timeout for server reading data (default 1h0m0s)
-      --rc-server-write-timeout duration   Timeout for server writing data (default 1h0m0s)
-      --rc-user string   User name for authentication.
-      --retries int   Retry operations this many times if they fail (default 3)
-      --retries-sleep duration   Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
-      --s3-access-key-id string   AWS Access Key ID.
-      --s3-acl string   Canned ACL used when creating buckets and storing or copying objects.
-      --s3-chunk-size SizeSuffix   Chunk size to use for uploading. (default 5M)
-      --s3-disable-checksum   Don't store MD5 checksum with object metadata
-      --s3-endpoint string   Endpoint for S3 API.
-      --s3-env-auth   Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
-      --s3-force-path-style   If true use path style access if false use virtual hosted style. (default true)
-      --s3-location-constraint string   Location constraint - must be set to match the Region.
-      --s3-provider string   Choose your S3 provider.
-      --s3-region string   Region to connect to.
-      --s3-secret-access-key string   AWS Secret Access Key (password)
-      --s3-server-side-encryption string   The server-side encryption algorithm used when storing this object in S3.
-      --s3-session-token string   An AWS session token
-      --s3-sse-kms-key-id string   If using KMS ID you must provide the ARN of Key.
-      --s3-storage-class string   The storage class to use when storing new objects in S3.
-      --s3-upload-concurrency int   Concurrency for multipart uploads. (default 2)
-      --s3-v2-auth   If true use v2 authentication.
-      --sftp-ask-password   Allow asking for SFTP password when needed.
-      --sftp-disable-hashcheck   Disable the execution of SSH commands to determine if remote file hashing is available.
-      --sftp-host string   SSH host to connect to
-      --sftp-key-file string   Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
-      --sftp-pass string   SSH password, leave blank to use ssh-agent.
-      --sftp-path-override string   Override path used by SSH connection.
-      --sftp-port string   SSH port, leave blank to use default (22)
-      --sftp-set-modtime   Set the modified time on the remote if set. (default true)
-      --sftp-use-insecure-cipher   Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
-      --sftp-user string   SSH username, leave blank for current username, ncw
-      --size-only   Skip based on size only, not mod-time or checksum
-      --skip-links   Don't warn about skipped symlinks.
-      --stats duration   Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
-      --stats-file-name-length int   Max file name length in stats. 0 for no limit (default 40)
-      --stats-log-level string   Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
-      --stats-one-line   Make the stats fit on one line.
-      --stats-unit string   Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
-      --streaming-upload-cutoff int   Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
-      --suffix string   Suffix for use with --backup-dir.
-      --swift-auth string   Authentication URL for server (OS_AUTH_URL).
-      --swift-auth-token string   Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
-      --swift-auth-version int   AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
-      --swift-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
-      --swift-domain string   User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
-      --swift-endpoint-type string   Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-      --swift-env-auth   Get swift credentials from environment variables in standard OpenStack form.
-      --swift-key string   API key or password (OS_PASSWORD).
-      --swift-region string   Region name - optional (OS_REGION_NAME)
-      --swift-storage-policy string   The storage policy to use when creating a new container
-      --swift-storage-url string   Storage URL - optional (OS_STORAGE_URL)
-      --swift-tenant string   Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
-      --swift-tenant-domain string   Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
-      --swift-tenant-id string   Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
-      --swift-user string   User name to log in (OS_USERNAME).
-      --swift-user-id string   User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
-      --syslog   Use Syslog for logging
-      --syslog-facility string   Facility for syslog, eg KERN,USER,... (default "DAEMON")
-      --timeout duration   IO idle timeout (default 5m0s)
-      --tpslimit float   Limit HTTP transactions per second to this.
-      --tpslimit-burst int   Max burst of transactions for --tpslimit. (default 1)
-      --track-renames   When synchronizing, track file renames and do a server side move if possible
-      --transfers int   Number of file transfers to run in parallel. (default 4)
-      --union-remotes string   List of space separated remotes.
-  -u, --update   Skip files that are newer on the destination.
-      --use-server-modtime   Use server modified time instead of object metadata
-      --user-agent string   Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45")
-  -v, --verbose count   Print lots more stuff (repeat for more)
-      --webdav-bearer-token string   Bearer token instead of user/pass (eg a Macaroon)
-      --webdav-pass string   Password.
-      --webdav-url string   URL of http host to connect to
-      --webdav-user string   User name
-      --webdav-vendor string   Name of the Webdav site/service/software you are using
-      --yandex-client-id string   Yandex Client Id
-      --yandex-client-secret string   Yandex Client Secret
-      --yandex-unlink   Remove existing public link to file/folder with link command rather than creating.
+      --acd-auth-url string   Auth server URL.
+      --acd-client-id string   Amazon Application Client ID.
+      --acd-client-secret string   Amazon Application Client Secret.
+      --acd-templink-threshold SizeSuffix   Files >= this size will be downloaded via their tempLink. (default 9G)
+      --acd-token-url string   Token server url.
+      --acd-upload-wait-per-gb Duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+      --alias-remote string   Remote or path to alias.
+      --ask-password   Allow prompt for password for encrypted configuration. (default true)
+      --auto-confirm   If enabled, do not request console confirmation.
+      --azureblob-access-tier string   Access tier of blob: hot, cool or archive.
+      --azureblob-account string   Storage Account Name (leave blank to use connection string or SAS URL)
+      --azureblob-chunk-size SizeSuffix   Upload chunk size (<= 100MB). (default 4M)
+      --azureblob-endpoint string   Endpoint for the service
+      --azureblob-key string   Storage Account Key (leave blank to use connection string or SAS URL)
+      --azureblob-list-chunk int   Size of blob list. (default 5000)
+      --azureblob-sas-url string   SAS URL for container level access only
+      --azureblob-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+      --b2-account string   Account ID or Application Key ID
+      --b2-chunk-size SizeSuffix   Upload chunk size. Must fit in memory. (default 96M)
+      --b2-disable-checksum   Disable checksums for large (> upload cutoff) files
+      --b2-endpoint string   Endpoint for the service.
+      --b2-hard-delete   Permanently delete files on remote removal, otherwise hide files.
+      --b2-key string   Application Key
+      --b2-test-mode string   A flag string for X-Bz-Test-Mode header for debugging.
+      --b2-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload. (default 200M)
+      --b2-versions   Include old versions in directory listings.
+      --backup-dir string   Make backups into hierarchy based in DIR.
+      --bind string   Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+      --box-client-id string   Box App Client Id.
+      --box-client-secret string   Box App Client Secret
+      --box-commit-retries int   Max number of times to try committing a multipart file. (default 100)
+      --box-upload-cutoff SizeSuffix   Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+      --buffer-size SizeSuffix   In memory buffer size when reading files for each --transfer. (default 16M)
+      --bwlimit BwTimetable   Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+      --cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+      --cache-chunk-no-memory   Disable the in-memory cache for storing chunks during streaming.
+      --cache-chunk-path string   Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
+      --cache-chunk-size SizeSuffix   The size of a chunk (partial file data). (default 5M)
+      --cache-chunk-total-size SizeSuffix   The total size that the chunks can take up on the local disk. (default 10G)
+      --cache-db-path string   Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
+      --cache-db-purge   Clear all the cached data for this remote on start.
+      --cache-db-wait-time Duration   How long to wait for the DB to be available - 0 is unlimited (default 1s)
+      --cache-dir string   Directory rclone will use for caching. (default "$HOME/.cache/rclone")
+      --cache-info-age Duration   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+      --cache-plex-insecure string   Skip all certificate verifications when connecting to the Plex server
+      --cache-plex-password string   The password of the Plex user
+      --cache-plex-url string   The URL of the Plex server
+      --cache-plex-username string   The username of the Plex user
+      --cache-read-retries int   How many times to retry a read from a cache storage. (default 10)
+      --cache-remote string   Remote to cache.
+      --cache-rps int   Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+      --cache-tmp-upload-path string   Directory to keep temporary files until they are uploaded.
+      --cache-tmp-wait-time Duration   How long should files be stored in local cache before being uploaded (default 15s)
+      --cache-workers int   How many workers should run in parallel to download chunks. (default 4)
+      --cache-writes   Cache file data on writes through the FS
+      --checkers int   Number of checkers to run in parallel. (default 8)
+  -c, --checksum   Skip based on checksum (if available) & size, not mod-time & size
+      --config string   Config file. (default "/home/ncw/.rclone.conf")
+      --contimeout duration   Connect timeout (default 1m0s)
+  -L, --copy-links   Follow symlinks and copy the pointed to item.
+      --cpuprofile string   Write cpu profile to file
+      --crypt-directory-name-encryption   Option to either encrypt directory names or leave them intact. (default true)
+      --crypt-filename-encryption string   How to encrypt the filenames. (default "standard")
+      --crypt-password string   Password or pass phrase for encryption.
+      --crypt-password2 string   Password or pass phrase for salt. Optional but recommended.
+      --crypt-remote string   Remote to encrypt/decrypt.
+      --crypt-show-mapping   For all files listed show how the names encrypt.
+      --delete-after   When synchronizing, delete files on destination after transferring (default)
+      --delete-before   When synchronizing, delete files on destination before transferring
+      --delete-during   When synchronizing, delete files during transfer
+      --delete-excluded   Delete files on dest excluded from sync
+      --disable string   Disable a comma separated list of features. Use help to see a list.
+      --drive-acknowledge-abuse   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+      --drive-allow-import-name-change   Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+      --drive-alternate-export   Use alternate export URLs for google documents export.
+      --drive-auth-owner-only   Only consider files owned by the authenticated user.
+      --drive-chunk-size SizeSuffix   Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+      --drive-client-id string   Google Application Client Id
+      --drive-client-secret string   Google Application Client Secret
+      --drive-export-formats string   Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+      --drive-formats string   Deprecated: see export_formats
+      --drive-impersonate string   Impersonate this user when using a service account.
+      --drive-import-formats string   Comma separated list of preferred formats for uploading Google docs.
+      --drive-keep-revision-forever   Keep new head revision of each file forever.
+      --drive-list-chunk int   Size of listing chunk 100-1000. 0 to disable. (default 1000)
+      --drive-pacer-burst int   Number of API calls to allow without sleeping. (default 100)
+      --drive-pacer-min-sleep Duration   Minimum time to sleep between API calls. (default 100ms)
+      --drive-root-folder-id string   ID of the root folder
+      --drive-scope string   Scope that rclone should use when requesting access from drive.
+      --drive-service-account-credentials string   Service Account Credentials JSON blob
+      --drive-service-account-file string   Service Account Credentials JSON file path
+      --drive-shared-with-me   Only show files that are shared with me.
+      --drive-skip-gdocs   Skip google documents in all listings.
+      --drive-team-drive string   ID of the Team Drive
+      --drive-trashed-only   Only show files that are in the trash.
+      --drive-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 8M)
+      --drive-use-created-date   Use file created date instead of modified date.
+      --drive-use-trash   Send files to the trash instead of deleting permanently. (default true)
+      --drive-v2-download-min-size SizeSuffix   If Objects are greater, use drive v2 API to download. (default off)
+      --dropbox-chunk-size SizeSuffix   Upload chunk size. (< 150M). (default 48M)
+      --dropbox-client-id string   Dropbox App Client Id
+      --dropbox-client-secret string   Dropbox App Client Secret
+      --dropbox-impersonate string   Impersonate this user when using a business account.
+  -n, --dry-run   Do a trial run with no permanent changes
+      --dump DumpFlags   List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+      --dump-bodies   Dump HTTP headers and bodies - may contain sensitive info
+      --dump-headers   Dump HTTP headers - may contain sensitive info
+      --exclude stringArray   Exclude files matching pattern
+      --exclude-from stringArray   Read exclude patterns from file
+      --exclude-if-present string   Exclude directories if filename is present
+      --fast-list   Use recursive list if available. Uses more memory but fewer transactions.
+      --files-from stringArray   Read list of source-file names from file
+  -f, --filter stringArray   Add a file-filtering rule
+      --filter-from stringArray   Read filtering patterns from a file
+      --ftp-host string   FTP host to connect to
+      --ftp-pass string   FTP password
+      --ftp-port string   FTP port, leave blank to use default (21)
+      --ftp-user string   FTP username, leave blank for current username, $USER
+      --gcs-bucket-acl string   Access Control List for new buckets.
+      --gcs-client-id string   Google Application Client Id
+      --gcs-client-secret string   Google Application Client Secret
+      --gcs-location string   Location for the newly created buckets.
+      --gcs-object-acl string   Access Control List for new objects.
+      --gcs-project-number string   Project number.
+      --gcs-service-account-file string   Service Account Credentials JSON file path
+      --gcs-storage-class string   The storage class to use when storing objects in Google Cloud Storage.
+      --http-url string   URL of http host to connect to
+      --hubic-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
+      --hubic-client-id string   Hubic Client Id
+      --hubic-client-secret string   Hubic Client Secret
+      --hubic-no-chunk   Don't chunk files during streaming upload.
+      --ignore-case   Ignore case in filters (case insensitive)
+      --ignore-checksum   Skip post copy check of checksums.
+      --ignore-errors   delete even if there are I/O errors
+      --ignore-existing   Skip all files that exist on destination
+      --ignore-size   Ignore size when skipping use mod-time or checksum.
+  -I, --ignore-times   Don't skip files that match size and time - transfer all files
+      --immutable   Do not modify files. Fail if existing files have been modified.
+      --include stringArray   Include files matching pattern
+      --include-from stringArray   Read include patterns from file
+      --jottacloud-hard-delete   Delete files permanently rather than putting them into the trash.
+      --jottacloud-md5-memory-limit SizeSuffix   Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+      --jottacloud-mountpoint string   The mountpoint to use.
+      --jottacloud-unlink   Remove existing public link to file/folder with link command rather than creating.
+      --jottacloud-upload-resume-limit SizeSuffix   Files bigger than this can be resumed if the upload fails. (default 10M)
+      --jottacloud-user string   User Name:
+  -l, --links   Translate symlinks to/from regular files with a '.rclonelink' extension
+      --local-no-check-updated   Don't check to see if the files change during upload
+      --local-no-unicode-normalization   Don't apply unicode normalization to paths and filenames (Deprecated)
+      --local-nounc string   Disable UNC (long path names) conversion on Windows
+      --log-file string   Log everything to this file
+      --log-format string   Comma separated list of log format options (default "date,time")
+      --log-level string   Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+      --low-level-retries int   Number of low level retries to do. (default 10)
+      --max-age Duration   Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --max-backlog int   Maximum number of objects in sync or check backlog. (default 10000)
+      --max-delete int   When synchronizing, limit the number of deletes (default -1)
+      --max-depth int   If set limits the recursion depth to this. (default -1)
+      --max-size SizeSuffix   Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+      --max-transfer SizeSuffix   Maximum size of data to transfer. (default off)
+      --mega-debug   Output more debug from Mega.
+      --mega-hard-delete   Delete files permanently rather than putting them into the trash.
+      --mega-pass string   Password.
+      --mega-user string   User name
+      --memprofile string   Write memory profile to file
+      --min-age Duration   Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --min-size SizeSuffix   Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+      --modify-window duration   Max time diff to be considered the same (default 1ns)
+      --no-check-certificate   Do not verify the server SSL certificate. Insecure.
+      --no-gzip-encoding   Don't set Accept-Encoding: gzip.
+      --no-traverse   Don't traverse destination file system on copy.
+      --no-update-modtime   Don't update destination mod-time if files identical.
+  -x, --one-file-system   Don't cross filesystem boundaries (unix/macOS only).
+      --onedrive-chunk-size SizeSuffix   Chunk size to upload files with - must be multiple of 320k. (default 10M)
+      --onedrive-client-id string   Microsoft App Client Id
+      --onedrive-client-secret string   Microsoft App Client Secret
+      --onedrive-drive-id string   The ID of the drive to use
+      --onedrive-drive-type string   The type of the drive ( personal | business | documentLibrary )
+      --onedrive-expose-onenote-files   Set to make OneNote files show up in directory listings.
+      --opendrive-password string   Password.
+      --opendrive-username string   Username
+      --pcloud-client-id string   Pcloud App Client Id
+      --pcloud-client-secret string   Pcloud App Client Secret
+  -P, --progress   Show progress during transfer.
+      --qingstor-access-key-id string   QingStor Access Key ID
+      --qingstor-chunk-size SizeSuffix   Chunk size to use for uploading. (default 4M)
+      --qingstor-connection-retries int   Number of connection retries. (default 3)
+      --qingstor-endpoint string   Enter an endpoint URL to connect to the QingStor API.
+      --qingstor-env-auth   Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+      --qingstor-secret-access-key string   QingStor Secret Access Key (password)
+      --qingstor-upload-concurrency int   Concurrency for multipart uploads. (default 1)
+      --qingstor-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 200M)
+      --qingstor-zone string   Zone to connect to.
+  -q, --quiet   Print as little stuff as possible
+      --rc   Enable the remote control server.
+      --rc-addr string   IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+      --rc-cert string   SSL PEM key (concatenation of certificate and CA certificate)
+      --rc-client-ca string   Client certificate authority to verify clients with
+      --rc-files string   Path to local files to serve on the HTTP server.
+      --rc-htpasswd string   htpasswd file - if not provided no authentication is done
+      --rc-key string   SSL PEM Private key
+      --rc-max-header-bytes int   Maximum size of request header (default 4096)
+      --rc-no-auth   Don't require auth for certain methods.
+      --rc-pass string   Password for authentication.
+      --rc-realm string   realm for authentication (default "rclone")
+      --rc-serve   Enable the serving of remote objects.
+      --rc-server-read-timeout duration   Timeout for server reading data (default 1h0m0s)
+      --rc-server-write-timeout duration   Timeout for server writing data (default 1h0m0s)
+      --rc-user string   User name for authentication.
+      --retries int   Retry operations this many times if they fail (default 3)
+      --retries-sleep duration   Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+      --s3-access-key-id string   AWS Access Key ID.
+      --s3-acl string   Canned ACL used when creating buckets and storing or copying objects.
+      --s3-bucket-acl string   Canned ACL used when creating buckets.
+      --s3-chunk-size SizeSuffix   Chunk size to use for uploading. (default 5M)
+      --s3-disable-checksum   Don't store MD5 checksum with object metadata
+      --s3-endpoint string   Endpoint for S3 API.
+      --s3-env-auth   Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+      --s3-force-path-style   If true use path style access if false use virtual hosted style. (default true)
+      --s3-location-constraint string   Location constraint - must be set to match the Region.
+      --s3-provider string   Choose your S3 provider.
+      --s3-region string   Region to connect to.
+      --s3-secret-access-key string   AWS Secret Access Key (password)
+      --s3-server-side-encryption string   The server-side encryption algorithm used when storing this object in S3.
+      --s3-session-token string   An AWS session token
+      --s3-sse-kms-key-id string   If using KMS ID you must provide the ARN of Key.
+      --s3-storage-class string   The storage class to use when storing new objects in S3.
+      --s3-upload-concurrency int   Concurrency for multipart uploads. (default 4)
+      --s3-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 200M)
+      --s3-v2-auth   If true use v2 authentication.
+      --sftp-ask-password   Allow asking for SFTP password when needed.
+      --sftp-disable-hashcheck   Disable the execution of SSH commands to determine if remote file hashing is available.
+      --sftp-host string   SSH host to connect to
+      --sftp-key-file string   Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
+      --sftp-key-file-pass string   The passphrase to decrypt the PEM-encoded private key file.
+      --sftp-key-use-agent   When set forces the usage of the ssh-agent.
+      --sftp-pass string   SSH password, leave blank to use ssh-agent.
+      --sftp-path-override string   Override path used by SSH connection.
+      --sftp-port string   SSH port, leave blank to use default (22)
+      --sftp-set-modtime   Set the modified time on the remote if set. (default true)
+      --sftp-use-insecure-cipher   Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+      --sftp-user string   SSH username, leave blank for current username, ncw
+      --size-only   Skip based on size only, not mod-time or checksum
+      --skip-links   Don't warn about skipped symlinks.
+      --stats duration   Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+      --stats-file-name-length int   Max file name length in stats. 0 for no limit (default 45)
+      --stats-log-level string   Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+      --stats-one-line   Make the stats fit on one line.
+      --stats-unit string   Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+      --streaming-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+      --suffix string   Suffix for use with --backup-dir.
+      --swift-application-credential-id string   Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
+      --swift-application-credential-name string   Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
+      --swift-application-credential-secret string   Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
+      --swift-auth string   Authentication URL for server (OS_AUTH_URL).
+      --swift-auth-token string   Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+      --swift-auth-version int   AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+      --swift-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
+      --swift-domain string   User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+      --swift-endpoint-type string   Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+      --swift-env-auth   Get swift credentials from environment variables in standard OpenStack form.
+      --swift-key string   API key or password (OS_PASSWORD).
+      --swift-no-chunk   Don't chunk files during streaming upload.
+      --swift-region string   Region name - optional (OS_REGION_NAME)
+      --swift-storage-policy string   The storage policy to use when creating a new container
+      --swift-storage-url string   Storage URL - optional (OS_STORAGE_URL)
+      --swift-tenant string   Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+      --swift-tenant-domain string   Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+      --swift-tenant-id string   Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+      --swift-user string   User name to log in (OS_USERNAME).
+      --swift-user-id string   User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+      --syslog   Use Syslog for logging
+      --syslog-facility string   Facility for syslog, eg KERN,USER,... (default "DAEMON")
+      --timeout duration   IO idle timeout (default 5m0s)
+      --tpslimit float   Limit HTTP transactions per second to this.
+      --tpslimit-burst int   Max burst of transactions for --tpslimit. (default 1)
+      --track-renames   When synchronizing, track file renames and do a server side move if possible
+      --transfers int   Number of file transfers to run in parallel. (default 4)
+      --union-remotes string   List of space separated remotes.
+  -u, --update   Skip files that are newer on the destination.
+      --use-cookies   Enable session cookiejar.
+      --use-mmap   Use mmap allocator (see docs).
+      --use-server-modtime   Use server modified time instead of object metadata
+      --user-agent string   Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
+  -v, --verbose count   Print lots more stuff (repeat for more)
+      --webdav-bearer-token string   Bearer token instead of user/pass (eg a Macaroon)
+      --webdav-pass string   Password.
+      --webdav-url string   URL of http host to connect to
+      --webdav-user string   User name
+      --webdav-vendor string   Name of the Webdav site/service/software you are using
+      --yandex-client-id string   Yandex Client Id
+      --yandex-client-secret string   Yandex Client Secret
+      --yandex-unlink   Remove existing public link to file/folder with link command rather than creating.
 ```
 
 ### SEE ALSO
 
 * [rclone](/commands/rclone/)	 - Show help for rclone commands, flags and backends.
 
-###### Auto generated by spf13/cobra on 24-Nov-2018
+###### Auto generated by spf13/cobra on 9-Feb-2019
diff --git a/docs/content/commands/rclone_lsd.md b/docs/content/commands/rclone_lsd.md
index 7d278db5d..2649869d8 100644
--- a/docs/content/commands/rclone_lsd.md
+++ b/docs/content/commands/rclone_lsd.md
@@ -1,5 +1,5 @@
 ---
-date: 2018-11-24T13:43:29Z
+date: 2019-02-09T10:42:18Z
 title: "rclone lsd"
 slug: rclone_lsd
 url: /commands/rclone_lsd/
@@ -70,285 +70,303 @@ rclone lsd remote:path [flags]
 
 ### Options inherited from parent commands
 
 ```
-      --acd-auth-url string   Auth server URL.
-      --acd-client-id string   Amazon Application Client ID.
-      --acd-client-secret string   Amazon Application Client Secret.
-      --acd-templink-threshold SizeSuffix   Files >= this size will be downloaded via their tempLink. (default 9G)
-      --acd-token-url string   Token server url.
-      --acd-upload-wait-per-gb Duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
-      --alias-remote string   Remote or path to alias.
-      --ask-password   Allow prompt for password for encrypted configuration. (default true)
-      --auto-confirm   If enabled, do not request console confirmation.
-      --azureblob-access-tier string   Access tier of blob: hot, cool or archive.
-      --azureblob-account string   Storage Account Name (leave blank to use connection string or SAS URL)
-      --azureblob-chunk-size SizeSuffix   Upload chunk size (<= 100MB). (default 4M)
-      --azureblob-endpoint string   Endpoint for the service
-      --azureblob-key string   Storage Account Key (leave blank to use connection string or SAS URL)
-      --azureblob-list-chunk int   Size of blob list. (default 5000)
-      --azureblob-sas-url string   SAS URL for container level access only
-      --azureblob-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (<= 256MB). (default 256M)
-      --b2-account string   Account ID or Application Key ID
-      --b2-chunk-size SizeSuffix   Upload chunk size. Must fit in memory. (default 96M)
-      --b2-endpoint string   Endpoint for the service.
-      --b2-hard-delete   Permanently delete files on remote removal, otherwise hide files.
-      --b2-key string   Application Key
-      --b2-test-mode string   A flag string for X-Bz-Test-Mode header for debugging.
-      --b2-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload. (default 200M)
-      --b2-versions   Include old versions in directory listings.
-      --backup-dir string   Make backups into hierarchy based in DIR.
-      --bind string   Local address to bind to for outgoing connections, IPv4, IPv6 or name.
-      --box-client-id string   Box App Client Id.
-      --box-client-secret string   Box App Client Secret
-      --box-commit-retries int   Max number of times to try committing a multipart file. (default 100)
-      --box-upload-cutoff SizeSuffix   Cutoff for switching to multipart upload (>= 50MB). (default 50M)
-      --buffer-size int   In memory buffer size when reading files for each --transfer. (default 16M)
-      --bwlimit BwTimetable   Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
-      --cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
-      --cache-chunk-no-memory   Disable the in-memory cache for storing chunks during streaming.
-      --cache-chunk-path string   Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
-      --cache-chunk-size SizeSuffix   The size of a chunk (partial file data). (default 5M)
-      --cache-chunk-total-size SizeSuffix   The total size that the chunks can take up on the local disk. (default 10G)
-      --cache-db-path string   Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
-      --cache-db-purge   Clear all the cached data for this remote on start.
-      --cache-db-wait-time Duration   How long to wait for the DB to be available - 0 is unlimited (default 1s)
-      --cache-dir string   Directory rclone will use for caching. (default "$HOME/.cache/rclone")
-      --cache-info-age Duration   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
-      --cache-plex-insecure string   Skip all certificate verifications when connecting to the Plex server
-      --cache-plex-password string   The password of the Plex user
-      --cache-plex-url string   The URL of the Plex server
-      --cache-plex-username string   The username of the Plex user
-      --cache-read-retries int   How many times to retry a read from a cache storage. (default 10)
-      --cache-remote string   Remote to cache.
-      --cache-rps int   Limits the number of requests per second to the source FS (-1 to disable) (default -1)
-      --cache-tmp-upload-path string   Directory to keep temporary files until they are uploaded.
-      --cache-tmp-wait-time Duration   How long should files be stored in local cache before being uploaded (default 15s)
-      --cache-workers int   How many workers should run in parallel to download chunks. (default 4)
-      --cache-writes   Cache file data on writes through the FS
-      --checkers int   Number of checkers to run in parallel. (default 8)
-  -c, --checksum   Skip based on checksum & size, not mod-time & size
-      --config string   Config file. (default "/home/ncw/.rclone.conf")
-      --contimeout duration   Connect timeout (default 1m0s)
-  -L, --copy-links   Follow symlinks and copy the pointed to item.
-      --cpuprofile string   Write cpu profile to file
-      --crypt-directory-name-encryption   Option to either encrypt directory names or leave them intact. (default true)
-      --crypt-filename-encryption string   How to encrypt the filenames. (default "standard")
-      --crypt-password string   Password or pass phrase for encryption.
-      --crypt-password2 string   Password or pass phrase for salt. Optional but recommended.
-      --crypt-remote string   Remote to encrypt/decrypt.
-      --crypt-show-mapping   For all files listed show how the names encrypt.
-      --delete-after   When synchronizing, delete files on destination after transferring (default)
-      --delete-before   When synchronizing, delete files on destination before transferring
-      --delete-during   When synchronizing, delete files during transfer
-      --delete-excluded   Delete files on dest excluded from sync
-      --disable string   Disable a comma separated list of features. Use help to see a list.
-      --drive-acknowledge-abuse   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
-      --drive-allow-import-name-change   Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
-      --drive-alternate-export   Use alternate export URLs for google documents export.,
-      --drive-auth-owner-only   Only consider files owned by the authenticated user.
-      --drive-chunk-size SizeSuffix   Upload chunk size. Must a power of 2 >= 256k. (default 8M)
-      --drive-client-id string   Google Application Client Id
-      --drive-client-secret string   Google Application Client Secret
-      --drive-export-formats string   Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
-      --drive-formats string   Deprecated: see export_formats
-      --drive-impersonate string   Impersonate this user when using a service account.
-      --drive-import-formats string   Comma separated list of preferred formats for uploading Google docs.
-      --drive-keep-revision-forever   Keep new head revision of each file forever.
-      --drive-list-chunk int   Size of listing chunk 100-1000. 0 to disable. (default 1000)
-      --drive-root-folder-id string   ID of the root folder
-      --drive-scope string   Scope that rclone should use when requesting access from drive.
-      --drive-service-account-credentials string   Service Account Credentials JSON blob
-      --drive-service-account-file string   Service Account Credentials JSON file path
-      --drive-shared-with-me   Only show files that are shared with me.
-      --drive-skip-gdocs   Skip google documents in all listings.
-      --drive-team-drive string   ID of the Team Drive
-      --drive-trashed-only   Only show files that are in the trash.
-      --drive-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 8M)
-      --drive-use-created-date   Use file created date instead of modified date.,
-      --drive-use-trash   Send files to the trash instead of deleting permanently. (default true)
-      --drive-v2-download-min-size SizeSuffix   If Object's are greater, use drive v2 API to download. (default off)
-      --dropbox-chunk-size SizeSuffix   Upload chunk size. (< 150M). (default 48M)
-      --dropbox-client-id string   Dropbox App Client Id
-      --dropbox-client-secret string   Dropbox App Client Secret
-      --dropbox-impersonate string   Impersonate this user when using a business account.
-  -n, --dry-run   Do a trial run with no permanent changes
-      --dump string   List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
-      --dump-bodies   Dump HTTP headers and bodies - may contain sensitive info
-      --dump-headers   Dump HTTP bodies - may contain sensitive info
-      --exclude stringArray   Exclude files matching pattern
-      --exclude-from stringArray   Read exclude patterns from file
-      --exclude-if-present string   Exclude directories if filename is present
-      --fast-list   Use recursive list if available. Uses more memory but fewer transactions.
-      --files-from stringArray   Read list of source-file names from file
-  -f, --filter stringArray   Add a file-filtering rule
-      --filter-from stringArray   Read filtering patterns from a file
-      --ftp-host string   FTP host to connect to
-      --ftp-pass string   FTP password
-      --ftp-port string   FTP port, leave blank to use default (21)
-      --ftp-user string   FTP username, leave blank for current username, $USER
-      --gcs-bucket-acl string   Access Control List for new buckets.
-      --gcs-client-id string   Google Application Client Id
-      --gcs-client-secret string   Google Application Client Secret
-      --gcs-location string   Location for the newly created buckets.
-      --gcs-object-acl string   Access Control List for new objects.
-      --gcs-project-number string   Project number.
-      --gcs-service-account-file string   Service Account Credentials JSON file path
-      --gcs-storage-class string   The storage class to use when storing objects in Google Cloud Storage.
-      --http-url string   URL of http host to connect to
-      --hubic-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
-      --hubic-client-id string   Hubic Client Id
-      --hubic-client-secret string   Hubic Client Secret
-      --ignore-case   Ignore case in filters (case insensitive)
-      --ignore-checksum   Skip post copy check of checksums.
-      --ignore-errors   delete even if there are I/O errors
-      --ignore-existing   Skip all files that exist on destination
-      --ignore-size   Ignore size when skipping use mod-time or checksum.
-  -I, --ignore-times   Don't skip files that match size and time - transfer all files
-      --immutable   Do not modify files. Fail if existing files have been modified.
-      --include stringArray   Include files matching pattern
-      --include-from stringArray   Read include patterns from file
-      --jottacloud-hard-delete   Delete files permanently rather than putting them into the trash.
-      --jottacloud-md5-memory-limit SizeSuffix   Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
-      --jottacloud-mountpoint string   The mountpoint to use.
-      --jottacloud-pass string   Password.
-      --jottacloud-unlink   Remove existing public link to file/folder with link command rather than creating.
-      --jottacloud-user string   User Name
-      --local-no-check-updated   Don't check to see if the files change during upload
-      --local-no-unicode-normalization   Don't apply unicode normalization to paths and filenames (Deprecated)
-      --local-nounc string   Disable UNC (long path names) conversion on Windows
-      --log-file string   Log everything to this file
-      --log-format string   Comma separated list of log format options (default "date,time")
-      --log-level string   Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
-      --low-level-retries int   Number of low level retries to do. (default 10)
-      --max-age duration   Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-      --max-backlog int   Maximum number of objects in sync or check backlog. (default 10000)
-      --max-delete int   When synchronizing, limit the number of deletes (default -1)
-      --max-depth int   If set limits the recursion depth to this. (default -1)
-      --max-size int   Only transfer files smaller than this in k or suffix b|k|M|G (default off)
-      --max-transfer int   Maximum size of data to transfer. (default off)
-      --mega-debug   Output more debug from Mega.
-      --mega-hard-delete   Delete files permanently rather than putting them into the trash.
-      --mega-pass string   Password.
-      --mega-user string   User name
-      --memprofile string   Write memory profile to file
-      --min-age duration   Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-      --min-size int   Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-      --modify-window duration   Max time diff to be considered the same (default 1ns)
-      --no-check-certificate   Do not verify the server SSL certificate. Insecure.
-      --no-gzip-encoding   Don't set Accept-Encoding: gzip.
-      --no-traverse   Obsolete - does nothing.
-      --no-update-modtime   Don't update destination mod-time if files identical.
-  -x, --one-file-system   Don't cross filesystem boundaries (unix/macOS only).
-      --onedrive-chunk-size SizeSuffix   Chunk size to upload files with - must be multiple of 320k. (default 10M)
-      --onedrive-client-id string   Microsoft App Client Id
-      --onedrive-client-secret string   Microsoft App Client Secret
-      --onedrive-drive-id string   The ID of the drive to use
-      --onedrive-drive-type string   The type of the drive ( personal | business | documentLibrary )
-      --onedrive-expose-onenote-files   Set to make OneNote files show up in directory listings.
-      --opendrive-password string   Password.
-      --opendrive-username string   Username
-      --pcloud-client-id string   Pcloud App Client Id
-      --pcloud-client-secret string   Pcloud App Client Secret
-  -P, --progress   Show progress during transfer.
-      --qingstor-access-key-id string   QingStor Access Key ID
-      --qingstor-connection-retries int   Number of connection retries. (default 3)
-      --qingstor-endpoint string   Enter a endpoint URL to connection QingStor API.
-      --qingstor-env-auth   Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
-      --qingstor-secret-access-key string   QingStor Secret Access Key (password)
-      --qingstor-zone string   Zone to connect to.
-  -q, --quiet   Print as little stuff as possible
-      --rc   Enable the remote control server.
-      --rc-addr string   IPaddress:Port or :Port to bind server to. (default "localhost:5572")
-      --rc-cert string   SSL PEM key (concatenation of certificate and CA certificate)
-      --rc-client-ca string   Client certificate authority to verify clients with
-      --rc-files string   Path to local files to serve on the HTTP server.
-      --rc-htpasswd string   htpasswd file - if not provided no authentication is done
-      --rc-key string   SSL PEM Private key
-      --rc-max-header-bytes int   Maximum size of request header (default 4096)
-      --rc-no-auth   Don't require auth for certain methods.
-      --rc-pass string   Password for authentication.
-      --rc-realm string   realm for authentication (default "rclone")
-      --rc-serve   Enable the serving of remote objects.
-      --rc-server-read-timeout duration   Timeout for server reading data (default 1h0m0s)
-      --rc-server-write-timeout duration   Timeout for server writing data (default 1h0m0s)
-      --rc-user string   User name for authentication.
-      --retries int   Retry operations this many times if they fail (default 3)
-      --retries-sleep duration   Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
-      --s3-access-key-id string   AWS Access Key ID.
-      --s3-acl string   Canned ACL used when creating buckets and storing or copying objects.
-      --s3-chunk-size SizeSuffix   Chunk size to use for uploading. (default 5M)
-      --s3-disable-checksum   Don't store MD5 checksum with object metadata
-      --s3-endpoint string   Endpoint for S3 API.
-      --s3-env-auth   Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
-      --s3-force-path-style   If true use path style access if false use virtual hosted style. (default true)
-      --s3-location-constraint string   Location constraint - must be set to match the Region.
-      --s3-provider string   Choose your S3 provider.
-      --s3-region string   Region to connect to.
-      --s3-secret-access-key string   AWS Secret Access Key (password)
-      --s3-server-side-encryption string   The server-side encryption algorithm used when storing this object in S3.
-      --s3-session-token string   An AWS session token
-      --s3-sse-kms-key-id string   If using KMS ID you must provide the ARN of Key.
-      --s3-storage-class string   The storage class to use when storing new objects in S3.
-      --s3-upload-concurrency int   Concurrency for multipart uploads. (default 2)
-      --s3-v2-auth   If true use v2 authentication.
-      --sftp-ask-password   Allow asking for SFTP password when needed.
-      --sftp-disable-hashcheck   Disable the execution of SSH commands to determine if remote file hashing is available.
-      --sftp-host string   SSH host to connect to
-      --sftp-key-file string   Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
-      --sftp-pass string   SSH password, leave blank to use ssh-agent.
-      --sftp-path-override string   Override path used by SSH connection.
-      --sftp-port string   SSH port, leave blank to use default (22)
-      --sftp-set-modtime   Set the modified time on the remote if set. (default true)
-      --sftp-use-insecure-cipher   Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
-      --sftp-user string   SSH username, leave blank for current username, ncw
-      --size-only   Skip based on size only, not mod-time or checksum
-      --skip-links   Don't warn about skipped symlinks.
-      --stats duration   Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
-      --stats-file-name-length int   Max file name length in stats. 0 for no limit (default 40)
-      --stats-log-level string   Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
-      --stats-one-line   Make the stats fit on one line.
-      --stats-unit string   Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
-      --streaming-upload-cutoff int   Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
-      --suffix string   Suffix for use with --backup-dir.
-      --swift-auth string   Authentication URL for server (OS_AUTH_URL).
-      --swift-auth-token string   Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
-      --swift-auth-version int   AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
-      --swift-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
-      --swift-domain string   User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
-      --swift-endpoint-type string   Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-      --swift-env-auth   Get swift credentials from environment variables in standard OpenStack form.
-      --swift-key string   API key or password (OS_PASSWORD).
-      --swift-region string   Region name - optional (OS_REGION_NAME)
-      --swift-storage-policy string   The storage policy to use when creating a new container
-      --swift-storage-url string   Storage URL - optional (OS_STORAGE_URL)
-      --swift-tenant string   Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
-      --swift-tenant-domain string   Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
-      --swift-tenant-id string   Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
-      --swift-user string   User name to log in (OS_USERNAME).
-      --swift-user-id string   User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
-      --syslog   Use Syslog for logging
-      --syslog-facility string   Facility for syslog, eg KERN,USER,... (default "DAEMON")
-      --timeout duration   IO idle timeout (default 5m0s)
-      --tpslimit float   Limit HTTP transactions per second to this.
-      --tpslimit-burst int   Max burst of transactions for --tpslimit. (default 1)
-      --track-renames   When synchronizing, track file renames and do a server side move if possible
-      --transfers int   Number of file transfers to run in parallel. (default 4)
-      --union-remotes string   List of space separated remotes.
-  -u, --update   Skip files that are newer on the destination.
-      --use-server-modtime   Use server modified time instead of object metadata
-      --user-agent string   Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45")
-  -v, --verbose count   Print lots more stuff (repeat for more)
-      --webdav-bearer-token string   Bearer token instead of user/pass (eg a Macaroon)
-      --webdav-pass string   Password.
-      --webdav-url string   URL of http host to connect to
-      --webdav-user string   User name
-      --webdav-vendor string   Name of the Webdav site/service/software you are using
-      --yandex-client-id string   Yandex Client Id
-      --yandex-client-secret string   Yandex Client Secret
-      --yandex-unlink   Remove existing public link to file/folder with link command rather than creating.
+      --acd-auth-url string   Auth server URL.
+      --acd-client-id string   Amazon Application Client ID.
+      --acd-client-secret string   Amazon Application Client Secret.
+      --acd-templink-threshold SizeSuffix   Files >= this size will be downloaded via their tempLink. (default 9G)
+      --acd-token-url string   Token server url.
+      --acd-upload-wait-per-gb Duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+      --alias-remote string   Remote or path to alias.
+      --ask-password   Allow prompt for password for encrypted configuration. (default true)
+      --auto-confirm   If enabled, do not request console confirmation.
+      --azureblob-access-tier string   Access tier of blob: hot, cool or archive.
+      --azureblob-account string   Storage Account Name (leave blank to use connection string or SAS URL)
+      --azureblob-chunk-size SizeSuffix   Upload chunk size (<= 100MB). (default 4M)
+      --azureblob-endpoint string   Endpoint for the service
+      --azureblob-key string   Storage Account Key (leave blank to use connection string or SAS URL)
+      --azureblob-list-chunk int   Size of blob list. (default 5000)
+      --azureblob-sas-url string   SAS URL for container level access only
+      --azureblob-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+      --b2-account string   Account ID or Application Key ID
+      --b2-chunk-size SizeSuffix   Upload chunk size. Must fit in memory. (default 96M)
+      --b2-disable-checksum   Disable checksums for large (> upload cutoff) files
+      --b2-endpoint string   Endpoint for the service.
+      --b2-hard-delete   Permanently delete files on remote removal, otherwise hide files.
+      --b2-key string   Application Key
+      --b2-test-mode string   A flag string for X-Bz-Test-Mode header for debugging.
+      --b2-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload. (default 200M)
+      --b2-versions   Include old versions in directory listings.
+      --backup-dir string   Make backups into hierarchy based in DIR.
+      --bind string   Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+      --box-client-id string   Box App Client Id.
+      --box-client-secret string   Box App Client Secret
+      --box-commit-retries int   Max number of times to try committing a multipart file. (default 100)
+      --box-upload-cutoff SizeSuffix   Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+      --buffer-size SizeSuffix   In memory buffer size when reading files for each --transfer. (default 16M)
+      --bwlimit BwTimetable   Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+      --cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+      --cache-chunk-no-memory   Disable the in-memory cache for storing chunks during streaming.
+      --cache-chunk-path string   Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
+      --cache-chunk-size SizeSuffix   The size of a chunk (partial file data). (default 5M)
+      --cache-chunk-total-size SizeSuffix   The total size that the chunks can take up on the local disk. (default 10G)
+      --cache-db-path string   Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
+      --cache-db-purge   Clear all the cached data for this remote on start.
+      --cache-db-wait-time Duration   How long to wait for the DB to be available - 0 is unlimited (default 1s)
+      --cache-dir string   Directory rclone will use for caching. (default "$HOME/.cache/rclone")
+      --cache-info-age Duration   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+      --cache-plex-insecure string   Skip all certificate verifications when connecting to the Plex server
+      --cache-plex-password string   The password of the Plex user
+      --cache-plex-url string   The URL of the Plex server
+      --cache-plex-username string   The username of the Plex user
+      --cache-read-retries int   How many times to retry a read from a cache storage. (default 10)
+      --cache-remote string   Remote to cache.
+      --cache-rps int   Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+      --cache-tmp-upload-path string   Directory to keep temporary files until they are uploaded.
+      --cache-tmp-wait-time Duration   How long should files be stored in local cache before being uploaded (default 15s)
+      --cache-workers int   How many workers should run in parallel to download chunks. (default 4)
+      --cache-writes   Cache file data on writes through the FS
+      --checkers int   Number of checkers to run in parallel. (default 8)
+  -c, --checksum   Skip based on checksum (if available) & size, not mod-time & size
+      --config string   Config file. (default "/home/ncw/.rclone.conf")
+      --contimeout duration   Connect timeout (default 1m0s)
+  -L, --copy-links   Follow symlinks and copy the pointed to item.
+      --cpuprofile string   Write cpu profile to file
+      --crypt-directory-name-encryption   Option to either encrypt directory names or leave them intact. (default true)
+      --crypt-filename-encryption string   How to encrypt the filenames. (default "standard")
+      --crypt-password string   Password or pass phrase for encryption.
+      --crypt-password2 string   Password or pass phrase for salt. Optional but recommended.
+      --crypt-remote string   Remote to encrypt/decrypt.
+      --crypt-show-mapping   For all files listed show how the names encrypt.
+      --delete-after   When synchronizing, delete files on destination after transferring (default)
+      --delete-before   When synchronizing, delete files on destination before transferring
+      --delete-during   When synchronizing, delete files during transfer
+      --delete-excluded   Delete files on dest excluded from sync
+      --disable string   Disable a comma separated list of features. Use help to see a list.
+      --drive-acknowledge-abuse   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+      --drive-allow-import-name-change   Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+      --drive-alternate-export   Use alternate export URLs for google documents export.,
+      --drive-auth-owner-only   Only consider files owned by the authenticated user.
+      --drive-chunk-size SizeSuffix   Upload chunk size. Must a power of 2 >= 256k. (default 8M)
+      --drive-client-id string   Google Application Client Id
+      --drive-client-secret string   Google Application Client Secret
+      --drive-export-formats string   Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+      --drive-formats string   Deprecated: see export_formats
+      --drive-impersonate string   Impersonate this user when using a service account.
+      --drive-import-formats string   Comma separated list of preferred formats for uploading Google docs.
+      --drive-keep-revision-forever   Keep new head revision of each file forever.
+      --drive-list-chunk int   Size of listing chunk 100-1000. 0 to disable. (default 1000)
+      --drive-pacer-burst int   Number of API calls to allow without sleeping. (default 100)
+      --drive-pacer-min-sleep Duration   Minimum time to sleep between API calls. (default 100ms)
+      --drive-root-folder-id string   ID of the root folder
+      --drive-scope string   Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors Delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping; use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password.
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style.
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```

### SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

-###### Auto generated by spf13/cobra on 24-Nov-2018
+###### Auto generated by spf13/cobra on 9-Feb-2019
diff --git a/docs/content/commands/rclone_lsf.md b/docs/content/commands/rclone_lsf.md
index 7dca13987..24f7307b1 100644
--- a/docs/content/commands/rclone_lsf.md
+++ b/docs/content/commands/rclone_lsf.md
@@ -1,5 +1,5 @@
---
-date: 2018-11-24T13:43:29Z
+date: 2019-02-09T10:42:18Z
title: "rclone lsf"
slug: rclone_lsf
url: /commands/rclone_lsf/
@@ -148,285 +148,303 @@ rclone lsf remote:path [flags]
### Options inherited from parent commands

```
- --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB).
(default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). 
(default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. (default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. 
(default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. - --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. 
(default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
- --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. 
(default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. 
- --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. 
- --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. 
- --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. - --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. 
(default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. (default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. 
Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. 
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. 
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors Delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping; use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name: + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_lsjson.md b/docs/content/commands/rclone_lsjson.md index 5f83b9456..71f4c5970 100644 --- a/docs/content/commands/rclone_lsjson.md +++ b/docs/content/commands/rclone_lsjson.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone lsjson" slug: rclone_lsjson url: /commands/rclone_lsjson/ @@ -42,7 +42,13 @@ If "remote:path" contains the file "subfolder/file.txt", the Path for "file.txt" will be "subfolder/file.txt", not "remote:path/subfolder/file.txt". When used without --recursive the Path will always be the same as Name. -The time is in RFC3339 format with nanosecond precision. +The time is in RFC3339 format with up to nanosecond precision. The +number of decimal digits in the seconds will depend on the precision +that the remote can hold the times, so if times are accurate to the +nearest millisecond (eg Google Drive) then 3 digits will always be +shown ("2017-05-31T16:15:57.034+01:00") whereas if the times are +accurate to the nearest second (Dropbox, Box, WebDav etc) no digits +will be shown ("2017-05-31T16:15:57+01:00"). The whole output can be processed as a JSON blob, or alternatively it can be processed line by line as each item is written one to a line. @@ -88,285 +94,303 @@ rclone lsjson remote:path [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. 
- --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. 
- --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. 
- --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks. (default 4)
- --cache-writes Cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transferring (default)
- --delete-before When synchronizing, delete files on destination before transferring
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
- --drive-alternate-export Use alternate export URLs for google documents export.,
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-formats string Deprecated: see export_formats
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
- --drive-keep-revision-forever Keep new head revision of each file forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-credentials string Service Account Credentials JSON blob
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me.
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-team-drive string ID of the Team Drive
- --drive-trashed-only Only show files that are in the trash.
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use file created date instead of modified date.,
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
- --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- --dropbox-impersonate string Impersonate this user when using a business account.
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, $USER
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-case Ignore case in filters (case insensitive)
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-format string Comma separated list of log format options (default "date,time")
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --onedrive-drive-id string The ID of the drive to use
- --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
- --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-files string Path to local files to serve on the HTTP server.
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-no-auth Don't require auth for certain methods.
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-serve Enable the serving of remote objects.
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-session-token string An AWS session token
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing new objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --s3-v2-auth If true use v2 authentication.
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- --union-remotes string List of space separated remotes.
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
- --yandex-unlink Remove existing public link to file/folder with link command rather than creating.
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-disable-checksum Disable checksums for large (> upload cutoff) files
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum (if available) & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.,
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
+ --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.,
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ --dropbox-impersonate string Impersonate this user when using a business account.
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP bodies - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, $USER
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --hubic-no-chunk Don't chunk files during streaming upload.
+ --ignore-case Ignore case in filters (case insensitive)
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
+ --jottacloud-user string User Name:
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
+ --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-files string Path to local files to serve on the HTTP server.
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-no-auth Don't require auth for certain methods.
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-serve Enable the serving of remote objects.
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
+ --s3-bucket-acl string Canned ACL used when creating buckets.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
+ --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
+ --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
+ --sftp-key-use-agent When set forces the usage of the ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
+ --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
+ --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+      --swift-no-chunk                       Don't chunk files during streaming upload.
+      --swift-region string                  Region name - optional (OS_REGION_NAME)
+      --swift-storage-policy string          The storage policy to use when creating a new container
+      --swift-storage-url string             Storage URL - optional (OS_STORAGE_URL)
+      --swift-tenant string                  Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+      --swift-tenant-domain string           Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+      --swift-tenant-id string               Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+      --swift-user string                    User name to log in (OS_USERNAME).
+      --swift-user-id string                 User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+      --syslog                               Use Syslog for logging
+      --syslog-facility string               Facility for syslog, eg KERN,USER,... (default "DAEMON")
+      --timeout duration                     IO idle timeout (default 5m0s)
+      --tpslimit float                       Limit HTTP transactions per second to this.
+      --tpslimit-burst int                   Max burst of transactions for --tpslimit. (default 1)
+      --track-renames                        When synchronizing, track file renames and do a server side move if possible
+      --transfers int                        Number of file transfers to run in parallel. (default 4)
+      --union-remotes string                 List of space separated remotes.
+  -u, --update                               Skip files that are newer on the destination.
+      --use-cookies                          Enable session cookiejar.
+      --use-mmap                             Use mmap allocator (see docs).
+      --use-server-modtime                   Use server modified time instead of object metadata
+      --user-agent string                    Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
+  -v, --verbose count                        Print lots more stuff (repeat for more)
+      --webdav-bearer-token string           Bearer token instead of user/pass (eg a Macaroon)
+      --webdav-pass string                   Password.
+      --webdav-url string                    URL of http host to connect to
+      --webdav-user string                   User name
+      --webdav-vendor string                 Name of the Webdav site/service/software you are using
+      --yandex-client-id string              Yandex Client Id
+      --yandex-client-secret string          Yandex Client Secret
+      --yandex-unlink                        Remove existing public link to file/folder with link command rather than creating.
 ```
 
 ### SEE ALSO
 
 * [rclone](/commands/rclone/)	 - Show help for rclone commands, flags and backends.
 
-###### Auto generated by spf13/cobra on 24-Nov-2018
+###### Auto generated by spf13/cobra on 9-Feb-2019
diff --git a/docs/content/commands/rclone_lsl.md b/docs/content/commands/rclone_lsl.md
index fd2dc75bc..8a9218bf0 100644
--- a/docs/content/commands/rclone_lsl.md
+++ b/docs/content/commands/rclone_lsl.md
@@ -1,5 +1,5 @@
 ---
-date: 2018-11-24T13:43:29Z
+date: 2019-02-09T10:42:18Z
 title: "rclone lsl"
 slug: rclone_lsl
 url: /commands/rclone_lsl/
@@ -59,285 +59,303 @@ rclone lsl remote:path [flags]
 ### Options inherited from parent commands
 
 ```
-      --acd-auth-url string                  Auth server URL.
-      --acd-client-id string                 Amazon Application Client ID.
-      --acd-client-secret string             Amazon Application Client Secret.
-      --acd-templink-threshold SizeSuffix    Files >= this size will be downloaded via their tempLink. (default 9G)
-      --acd-token-url string                 Token server url.
-      --acd-upload-wait-per-gb Duration      Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
-      --alias-remote string                  Remote or path to alias.
-      --ask-password                         Allow prompt for password for encrypted configuration. (default true)
-      --auto-confirm                         If enabled, do not request console confirmation.
-      --azureblob-access-tier string         Access tier of blob: hot, cool or archive.
-      --azureblob-account string             Storage Account Name (leave blank to use connection string or SAS URL)
-      --azureblob-chunk-size SizeSuffix      Upload chunk size (<= 100MB). (default 4M)
-      --azureblob-endpoint string            Endpoint for the service
-      --azureblob-key string                 Storage Account Key (leave blank to use connection string or SAS URL)
-      --azureblob-list-chunk int             Size of blob list. (default 5000)
-      --azureblob-sas-url string             SAS URL for container level access only
-      --azureblob-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (<= 256MB). (default 256M)
-      --b2-account string                    Account ID or Application Key ID
-      --b2-chunk-size SizeSuffix             Upload chunk size. Must fit in memory. (default 96M)
-      --b2-endpoint string                   Endpoint for the service.
-      --b2-hard-delete                       Permanently delete files on remote removal, otherwise hide files.
-      --b2-key string                        Application Key
-      --b2-test-mode string                  A flag string for X-Bz-Test-Mode header for debugging.
-      --b2-upload-cutoff SizeSuffix          Cutoff for switching to chunked upload. (default 200M)
-      --b2-versions                          Include old versions in directory listings.
-      --backup-dir string                    Make backups into hierarchy based in DIR.
-      --bind string                          Local address to bind to for outgoing connections, IPv4, IPv6 or name.
-      --box-client-id string                 Box App Client Id.
-      --box-client-secret string             Box App Client Secret
-      --box-commit-retries int               Max number of times to try committing a multipart file. (default 100)
-      --box-upload-cutoff SizeSuffix         Cutoff for switching to multipart upload (>= 50MB). (default 50M)
-      --buffer-size int                      In memory buffer size when reading files for each --transfer. (default 16M)
-      --bwlimit BwTimetable                  Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
-      --cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
-      --cache-chunk-no-memory                Disable the in-memory cache for storing chunks during streaming.
-      --cache-chunk-path string              Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
-      --cache-chunk-size SizeSuffix          The size of a chunk (partial file data). (default 5M)
-      --cache-chunk-total-size SizeSuffix    The total size that the chunks can take up on the local disk. (default 10G)
-      --cache-db-path string                 Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
-      --cache-db-purge                       Clear all the cached data for this remote on start.
-      --cache-db-wait-time Duration          How long to wait for the DB to be available - 0 is unlimited (default 1s)
-      --cache-dir string                     Directory rclone will use for caching. (default "$HOME/.cache/rclone")
-      --cache-info-age Duration              How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
-      --cache-plex-insecure string           Skip all certificate verifications when connecting to the Plex server
-      --cache-plex-password string           The password of the Plex user
-      --cache-plex-url string                The URL of the Plex server
-      --cache-plex-username string           The username of the Plex user
-      --cache-read-retries int               How many times to retry a read from a cache storage. (default 10)
-      --cache-remote string                  Remote to cache.
-      --cache-rps int                        Limits the number of requests per second to the source FS (-1 to disable) (default -1)
-      --cache-tmp-upload-path string         Directory to keep temporary files until they are uploaded.
-      --cache-tmp-wait-time Duration         How long should files be stored in local cache before being uploaded (default 15s)
-      --cache-workers int                    How many workers should run in parallel to download chunks. (default 4)
-      --cache-writes                         Cache file data on writes through the FS
-      --checkers int                         Number of checkers to run in parallel. (default 8)
-  -c, --checksum                             Skip based on checksum & size, not mod-time & size
-      --config string                        Config file. (default "/home/ncw/.rclone.conf")
-      --contimeout duration                  Connect timeout (default 1m0s)
-  -L, --copy-links                           Follow symlinks and copy the pointed to item.
-      --cpuprofile string                    Write cpu profile to file
-      --crypt-directory-name-encryption      Option to either encrypt directory names or leave them intact. (default true)
-      --crypt-filename-encryption string     How to encrypt the filenames. (default "standard")
-      --crypt-password string                Password or pass phrase for encryption.
-      --crypt-password2 string               Password or pass phrase for salt. Optional but recommended.
-      --crypt-remote string                  Remote to encrypt/decrypt.
-      --crypt-show-mapping                   For all files listed show how the names encrypt.
-      --delete-after                         When synchronizing, delete files on destination after transferring (default)
-      --delete-before                        When synchronizing, delete files on destination before transferring
-      --delete-during                        When synchronizing, delete files during transfer
-      --delete-excluded                      Delete files on dest excluded from sync
-      --disable string                       Disable a comma separated list of features. Use help to see a list.
-      --drive-acknowledge-abuse              Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
-      --drive-allow-import-name-change       Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
-      --drive-alternate-export               Use alternate export URLs for google documents export.,
-      --drive-auth-owner-only                Only consider files owned by the authenticated user.
-      --drive-chunk-size SizeSuffix          Upload chunk size. Must a power of 2 >= 256k. (default 8M)
-      --drive-client-id string               Google Application Client Id
-      --drive-client-secret string           Google Application Client Secret
-      --drive-export-formats string          Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
-      --drive-formats string                 Deprecated: see export_formats
-      --drive-impersonate string             Impersonate this user when using a service account.
-      --drive-import-formats string          Comma separated list of preferred formats for uploading Google docs.
-      --drive-keep-revision-forever          Keep new head revision of each file forever.
-      --drive-list-chunk int                 Size of listing chunk 100-1000. 0 to disable. (default 1000)
-      --drive-root-folder-id string          ID of the root folder
-      --drive-scope string                   Scope that rclone should use when requesting access from drive.
-      --drive-service-account-credentials string   Service Account Credentials JSON blob
-      --drive-service-account-file string    Service Account Credentials JSON file path
-      --drive-shared-with-me                 Only show files that are shared with me.
-      --drive-skip-gdocs                     Skip google documents in all listings.
-      --drive-team-drive string              ID of the Team Drive
-      --drive-trashed-only                   Only show files that are in the trash.
-      --drive-upload-cutoff SizeSuffix       Cutoff for switching to chunked upload (default 8M)
-      --drive-use-created-date               Use file created date instead of modified date.,
-      --drive-use-trash                      Send files to the trash instead of deleting permanently. (default true)
-      --drive-v2-download-min-size SizeSuffix   If Object's are greater, use drive v2 API to download. (default off)
-      --dropbox-chunk-size SizeSuffix        Upload chunk size. (< 150M). (default 48M)
-      --dropbox-client-id string             Dropbox App Client Id
-      --dropbox-client-secret string         Dropbox App Client Secret
-      --dropbox-impersonate string           Impersonate this user when using a business account.
-  -n, --dry-run                              Do a trial run with no permanent changes
-      --dump string                          List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
-      --dump-bodies                          Dump HTTP headers and bodies - may contain sensitive info
-      --dump-headers                         Dump HTTP bodies - may contain sensitive info
-      --exclude stringArray                  Exclude files matching pattern
-      --exclude-from stringArray             Read exclude patterns from file
-      --exclude-if-present string            Exclude directories if filename is present
-      --fast-list                            Use recursive list if available. Uses more memory but fewer transactions.
-      --files-from stringArray               Read list of source-file names from file
-  -f, --filter stringArray                   Add a file-filtering rule
-      --filter-from stringArray              Read filtering patterns from a file
-      --ftp-host string                      FTP host to connect to
-      --ftp-pass string                      FTP password
-      --ftp-port string                      FTP port, leave blank to use default (21)
-      --ftp-user string                      FTP username, leave blank for current username, $USER
-      --gcs-bucket-acl string                Access Control List for new buckets.
-      --gcs-client-id string                 Google Application Client Id
-      --gcs-client-secret string             Google Application Client Secret
-      --gcs-location string                  Location for the newly created buckets.
-      --gcs-object-acl string                Access Control List for new objects.
-      --gcs-project-number string            Project number.
-      --gcs-service-account-file string      Service Account Credentials JSON file path
-      --gcs-storage-class string             The storage class to use when storing objects in Google Cloud Storage.
-      --http-url string                      URL of http host to connect to
-      --hubic-chunk-size SizeSuffix          Above this size files will be chunked into a _segments container. (default 5G)
-      --hubic-client-id string               Hubic Client Id
-      --hubic-client-secret string           Hubic Client Secret
-      --ignore-case                          Ignore case in filters (case insensitive)
-      --ignore-checksum                      Skip post copy check of checksums.
-      --ignore-errors                        delete even if there are I/O errors
-      --ignore-existing                      Skip all files that exist on destination
-      --ignore-size                          Ignore size when skipping use mod-time or checksum.
-  -I, --ignore-times                         Don't skip files that match size and time - transfer all files
-      --immutable                            Do not modify files. Fail if existing files have been modified.
-      --include stringArray                  Include files matching pattern
-      --include-from stringArray             Read include patterns from file
-      --jottacloud-hard-delete               Delete files permanently rather than putting them into the trash.
-      --jottacloud-md5-memory-limit SizeSuffix   Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
-      --jottacloud-mountpoint string         The mountpoint to use.
-      --jottacloud-pass string               Password.
-      --jottacloud-unlink                    Remove existing public link to file/folder with link command rather than creating.
-      --jottacloud-user string               User Name
-      --local-no-check-updated               Don't check to see if the files change during upload
-      --local-no-unicode-normalization       Don't apply unicode normalization to paths and filenames (Deprecated)
-      --local-nounc string                   Disable UNC (long path names) conversion on Windows
-      --log-file string                      Log everything to this file
-      --log-format string                    Comma separated list of log format options (default "date,time")
-      --log-level string                     Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
-      --low-level-retries int                Number of low level retries to do. (default 10)
-      --max-age duration                     Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-      --max-backlog int                      Maximum number of objects in sync or check backlog. (default 10000)
-      --max-delete int                       When synchronizing, limit the number of deletes (default -1)
-      --max-depth int                        If set limits the recursion depth to this. (default -1)
-      --max-size int                         Only transfer files smaller than this in k or suffix b|k|M|G (default off)
-      --max-transfer int                     Maximum size of data to transfer. (default off)
-      --mega-debug                           Output more debug from Mega.
-      --mega-hard-delete                     Delete files permanently rather than putting them into the trash.
-      --mega-pass string                     Password.
-      --mega-user string                     User name
-      --memprofile string                    Write memory profile to file
-      --min-age duration                     Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-      --min-size int                         Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-      --modify-window duration               Max time diff to be considered the same (default 1ns)
-      --no-check-certificate                 Do not verify the server SSL certificate. Insecure.
-      --no-gzip-encoding                     Don't set Accept-Encoding: gzip.
-      --no-traverse                          Obsolete - does nothing.
-      --no-update-modtime                    Don't update destination mod-time if files identical.
-  -x, --one-file-system                      Don't cross filesystem boundaries (unix/macOS only).
-      --onedrive-chunk-size SizeSuffix       Chunk size to upload files with - must be multiple of 320k. (default 10M)
-      --onedrive-client-id string            Microsoft App Client Id
-      --onedrive-client-secret string        Microsoft App Client Secret
-      --onedrive-drive-id string             The ID of the drive to use
-      --onedrive-drive-type string           The type of the drive ( personal | business | documentLibrary )
-      --onedrive-expose-onenote-files        Set to make OneNote files show up in directory listings.
-      --opendrive-password string            Password.
-      --opendrive-username string            Username
-      --pcloud-client-id string              Pcloud App Client Id
-      --pcloud-client-secret string          Pcloud App Client Secret
-  -P, --progress                             Show progress during transfer.
-      --qingstor-access-key-id string        QingStor Access Key ID
-      --qingstor-connection-retries int      Number of connection retries. (default 3)
-      --qingstor-endpoint string             Enter a endpoint URL to connection QingStor API.
-      --qingstor-env-auth                    Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
-      --qingstor-secret-access-key string    QingStor Secret Access Key (password)
-      --qingstor-zone string                 Zone to connect to.
-  -q, --quiet                                Print as little stuff as possible
-      --rc                                   Enable the remote control server.
-      --rc-addr string                       IPaddress:Port or :Port to bind server to. (default "localhost:5572")
-      --rc-cert string                       SSL PEM key (concatenation of certificate and CA certificate)
-      --rc-client-ca string                  Client certificate authority to verify clients with
-      --rc-files string                      Path to local files to serve on the HTTP server.
-      --rc-htpasswd string                   htpasswd file - if not provided no authentication is done
-      --rc-key string                        SSL PEM Private key
-      --rc-max-header-bytes int              Maximum size of request header (default 4096)
-      --rc-no-auth                           Don't require auth for certain methods.
-      --rc-pass string                       Password for authentication.
-      --rc-realm string                      realm for authentication (default "rclone")
-      --rc-serve                             Enable the serving of remote objects.
-      --rc-server-read-timeout duration      Timeout for server reading data (default 1h0m0s)
-      --rc-server-write-timeout duration     Timeout for server writing data (default 1h0m0s)
-      --rc-user string                       User name for authentication.
-      --retries int                          Retry operations this many times if they fail (default 3)
-      --retries-sleep duration               Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
-      --s3-access-key-id string              AWS Access Key ID.
-      --s3-acl string                        Canned ACL used when creating buckets and storing or copying objects.
-      --s3-chunk-size SizeSuffix             Chunk size to use for uploading. (default 5M)
-      --s3-disable-checksum                  Don't store MD5 checksum with object metadata
-      --s3-endpoint string                   Endpoint for S3 API.
-      --s3-env-auth                          Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
-      --s3-force-path-style                  If true use path style access if false use virtual hosted style. (default true)
-      --s3-location-constraint string        Location constraint - must be set to match the Region.
-      --s3-provider string                   Choose your S3 provider.
-      --s3-region string                     Region to connect to.
-      --s3-secret-access-key string          AWS Secret Access Key (password)
-      --s3-server-side-encryption string     The server-side encryption algorithm used when storing this object in S3.
-      --s3-session-token string              An AWS session token
-      --s3-sse-kms-key-id string             If using KMS ID you must provide the ARN of Key.
-      --s3-storage-class string              The storage class to use when storing new objects in S3.
-      --s3-upload-concurrency int            Concurrency for multipart uploads. (default 2)
-      --s3-v2-auth                           If true use v2 authentication.
-      --sftp-ask-password                    Allow asking for SFTP password when needed.
-      --sftp-disable-hashcheck               Disable the execution of SSH commands to determine if remote file hashing is available.
-      --sftp-host string                     SSH host to connect to
-      --sftp-key-file string                 Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
-      --sftp-pass string                     SSH password, leave blank to use ssh-agent.
-      --sftp-path-override string            Override path used by SSH connection.
-      --sftp-port string                     SSH port, leave blank to use default (22)
-      --sftp-set-modtime                     Set the modified time on the remote if set. (default true)
-      --sftp-use-insecure-cipher             Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
-      --sftp-user string                     SSH username, leave blank for current username, ncw
-      --size-only                            Skip based on size only, not mod-time or checksum
-      --skip-links                           Don't warn about skipped symlinks.
-      --stats duration                       Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
-      --stats-file-name-length int           Max file name length in stats. 0 for no limit (default 40)
-      --stats-log-level string               Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
-      --stats-one-line                       Make the stats fit on one line.
-      --stats-unit string                    Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
-      --streaming-upload-cutoff int          Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
-      --suffix string                        Suffix for use with --backup-dir.
-      --swift-auth string                    Authentication URL for server (OS_AUTH_URL).
-      --swift-auth-token string              Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
-      --swift-auth-version int               AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
-      --swift-chunk-size SizeSuffix          Above this size files will be chunked into a _segments container. (default 5G)
-      --swift-domain string                  User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
-      --swift-endpoint-type string           Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-      --swift-env-auth                       Get swift credentials from environment variables in standard OpenStack form.
-      --swift-key string                     API key or password (OS_PASSWORD).
-      --swift-region string                  Region name - optional (OS_REGION_NAME)
-      --swift-storage-policy string          The storage policy to use when creating a new container
-      --swift-storage-url string             Storage URL - optional (OS_STORAGE_URL)
-      --swift-tenant string                  Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
-      --swift-tenant-domain string           Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
-      --swift-tenant-id string               Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
-      --swift-user string                    User name to log in (OS_USERNAME).
-      --swift-user-id string                 User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
-      --syslog                               Use Syslog for logging
-      --syslog-facility string               Facility for syslog, eg KERN,USER,... (default "DAEMON")
-      --timeout duration                     IO idle timeout (default 5m0s)
-      --tpslimit float                       Limit HTTP transactions per second to this.
-      --tpslimit-burst int                   Max burst of transactions for --tpslimit. (default 1)
-      --track-renames                        When synchronizing, track file renames and do a server side move if possible
-      --transfers int                        Number of file transfers to run in parallel. (default 4)
-      --union-remotes string                 List of space separated remotes.
-  -u, --update                               Skip files that are newer on the destination.
-      --use-server-modtime                   Use server modified time instead of object metadata
-      --user-agent string                    Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45")
-  -v, --verbose count                        Print lots more stuff (repeat for more)
-      --webdav-bearer-token string           Bearer token instead of user/pass (eg a Macaroon)
-      --webdav-pass string                   Password.
-      --webdav-url string                    URL of http host to connect to
-      --webdav-user string                   User name
-      --webdav-vendor string                 Name of the Webdav site/service/software you are using
-      --yandex-client-id string              Yandex Client Id
-      --yandex-client-secret string          Yandex Client Secret
-      --yandex-unlink                        Remove existing public link to file/folder with link command rather than creating.
+      --acd-auth-url string                  Auth server URL.
+      --acd-client-id string                 Amazon Application Client ID.
+      --acd-client-secret string             Amazon Application Client Secret.
+      --acd-templink-threshold SizeSuffix    Files >= this size will be downloaded via their tempLink. (default 9G)
+      --acd-token-url string                 Token server url.
+      --acd-upload-wait-per-gb Duration      Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+      --alias-remote string                  Remote or path to alias.
+      --ask-password                         Allow prompt for password for encrypted configuration. (default true)
+      --auto-confirm                         If enabled, do not request console confirmation.
+      --azureblob-access-tier string         Access tier of blob: hot, cool or archive.
+      --azureblob-account string             Storage Account Name (leave blank to use connection string or SAS URL)
+      --azureblob-chunk-size SizeSuffix      Upload chunk size (<= 100MB). (default 4M)
+      --azureblob-endpoint string            Endpoint for the service
+      --azureblob-key string                 Storage Account Key (leave blank to use connection string or SAS URL)
+      --azureblob-list-chunk int             Size of blob list. (default 5000)
+      --azureblob-sas-url string             SAS URL for container level access only
+      --azureblob-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+      --b2-account string                    Account ID or Application Key ID
+      --b2-chunk-size SizeSuffix             Upload chunk size. Must fit in memory. (default 96M)
+      --b2-disable-checksum                  Disable checksums for large (> upload cutoff) files
+      --b2-endpoint string                   Endpoint for the service.
+      --b2-hard-delete                       Permanently delete files on remote removal, otherwise hide files.
+      --b2-key string                        Application Key
+      --b2-test-mode string                  A flag string for X-Bz-Test-Mode header for debugging.
+      --b2-upload-cutoff SizeSuffix          Cutoff for switching to chunked upload. (default 200M)
+      --b2-versions                          Include old versions in directory listings.
+      --backup-dir string                    Make backups into hierarchy based in DIR.
+      --bind string                          Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+      --box-client-id string                 Box App Client Id.
+      --box-client-secret string             Box App Client Secret
+      --box-commit-retries int               Max number of times to try committing a multipart file. (default 100)
+      --box-upload-cutoff SizeSuffix         Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+      --buffer-size SizeSuffix               In memory buffer size when reading files for each --transfer. (default 16M)
+      --bwlimit BwTimetable                  Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+      --cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+      --cache-chunk-no-memory                Disable the in-memory cache for storing chunks during streaming.
+      --cache-chunk-path string              Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
+      --cache-chunk-size SizeSuffix          The size of a chunk (partial file data). (default 5M)
+      --cache-chunk-total-size SizeSuffix    The total size that the chunks can take up on the local disk. (default 10G)
+      --cache-db-path string                 Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
+      --cache-db-purge                       Clear all the cached data for this remote on start.
+      --cache-db-wait-time Duration          How long to wait for the DB to be available - 0 is unlimited (default 1s)
+      --cache-dir string                     Directory rclone will use for caching. (default "$HOME/.cache/rclone")
+      --cache-info-age Duration              How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+      --cache-plex-insecure string           Skip all certificate verifications when connecting to the Plex server
+      --cache-plex-password string           The password of the Plex user
+      --cache-plex-url string                The URL of the Plex server
+      --cache-plex-username string           The username of the Plex user
+      --cache-read-retries int               How many times to retry a read from a cache storage. (default 10)
+      --cache-remote string                  Remote to cache.
+      --cache-rps int                        Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+      --cache-tmp-upload-path string         Directory to keep temporary files until they are uploaded.
+      --cache-tmp-wait-time Duration         How long should files be stored in local cache before being uploaded (default 15s)
+      --cache-workers int                    How many workers should run in parallel to download chunks. (default 4)
+      --cache-writes                         Cache file data on writes through the FS
+      --checkers int                         Number of checkers to run in parallel. (default 8)
+  -c, --checksum                             Skip based on checksum (if available) & size, not mod-time & size
+      --config string                        Config file. (default "/home/ncw/.rclone.conf")
+      --contimeout duration                  Connect timeout (default 1m0s)
+  -L, --copy-links                           Follow symlinks and copy the pointed to item.
+      --cpuprofile string                    Write cpu profile to file
+      --crypt-directory-name-encryption      Option to either encrypt directory names or leave them intact. (default true)
+      --crypt-filename-encryption string     How to encrypt the filenames. (default "standard")
+      --crypt-password string                Password or pass phrase for encryption.
+      --crypt-password2 string               Password or pass phrase for salt. Optional but recommended.
+      --crypt-remote string                  Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. 
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access, if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_md5sum.md b/docs/content/commands/rclone_md5sum.md index 82890f6d1..3dbc47140 100644 --- a/docs/content/commands/rclone_md5sum.md +++ b/docs/content/commands/rclone_md5sum.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone md5sum" slug: rclone_md5sum url: /commands/rclone_md5sum/ @@ -28,285 +28,303 @@ rclone md5sum remote:path [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). 
(default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). 
(default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. (default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. 
(default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. - --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. 
(default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
- --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. 
(default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. 
- --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. 
- --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. 
- --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. - --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. 
(default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. (default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. 
Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. 
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. 
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors Delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping; use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+      --webdav-url string                          URL of http host to connect to
+      --webdav-user string                         User name
+      --webdav-vendor string                       Name of the Webdav site/service/software you are using
+      --yandex-client-id string                    Yandex Client Id
+      --yandex-client-secret string                Yandex Client Secret
+      --yandex-unlink                              Remove existing public link to file/folder with link command rather than creating.
 ```
 
 ### SEE ALSO
 
 * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
 
-###### Auto generated by spf13/cobra on 24-Nov-2018
+###### Auto generated by spf13/cobra on 9-Feb-2019
diff --git a/docs/content/commands/rclone_mkdir.md b/docs/content/commands/rclone_mkdir.md
index e1eae6334..c6c2f4f10 100644
--- a/docs/content/commands/rclone_mkdir.md
+++ b/docs/content/commands/rclone_mkdir.md
@@ -1,5 +1,5 @@
 ---
-date: 2018-11-24T13:43:29Z
+date: 2019-02-09T10:42:18Z
 title: "rclone mkdir"
 slug: rclone_mkdir
 url: /commands/rclone_mkdir/
@@ -25,285 +25,303 @@ rclone mkdir remote:path [flags]
 ### Options inherited from parent commands
 
 ```
-      --acd-auth-url string                  Auth server URL.
-      --acd-client-id string                 Amazon Application Client ID.
-      --acd-client-secret string             Amazon Application Client Secret.
-      --acd-templink-threshold SizeSuffix    Files >= this size will be downloaded via their tempLink. (default 9G)
-      --acd-token-url string                 Token server url.
-      --acd-upload-wait-per-gb Duration      Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
-      --alias-remote string                  Remote or path to alias.
-      --ask-password                         Allow prompt for password for encrypted configuration. (default true)
-      --auto-confirm                         If enabled, do not request console confirmation.
-      --azureblob-access-tier string         Access tier of blob: hot, cool or archive.
-      --azureblob-account string             Storage Account Name (leave blank to use connection string or SAS URL)
-      --azureblob-chunk-size SizeSuffix      Upload chunk size (<= 100MB).
(default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). 
(default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. (default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. 
(default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. - --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. 
(default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
- --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. 
(default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. 
- --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. 
- --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. 
- --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. - --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. 
(default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. (default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. 
Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. 
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. 
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name: + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, $USER + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_mount.md b/docs/content/commands/rclone_mount.md index c6bcffaa7..c01d8ae15 100644 --- a/docs/content/commands/rclone_mount.md +++ b/docs/content/commands/rclone_mount.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone mount" slug: rclone_mount url: /commands/rclone_mount/ @@ -213,6 +213,7 @@ may find that you need one or the other or both. --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off") --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) + --vfs-cache-max-size int Max total size of objects in the cache. (default off) If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -228,6 +229,11 @@ closed so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache. +If using --vfs-cache-max-size note that the cache may exceed this size +for two reasons. Firstly because it is only checked every +--vfs-cache-poll-interval. Secondly because open files cannot be +evicted from the cache. 
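+ As an illustration of the cache behaviour described above, the flags could be combined in a mount invocation like this (`remote:path` and the mountpoint are placeholders, and the 1G limit is just an example value):
+ 
+ ```
+ rclone mount remote:path /path/to/mountpoint \
+     --vfs-cache-mode writes \
+     --vfs-cache-max-size 1G \
+     --vfs-cache-poll-interval 1m \
+     -vv
+ ```
+ 
+ With these settings rclone aims to keep the cache below 1G, but per the caveats above it may temporarily exceed that size between polls or while cached files are held open.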
+ #### --vfs-cache-mode off In this mode the cache will read directly from the remote and write @@ -292,318 +298,339 @@ rclone mount remote:path /path/to/mountpoint [flags] ### Options ``` - --allow-non-empty Allow mounting over a non-empty directory. - --allow-other Allow access to other users. - --allow-root Allow access to root user. - --attr-timeout duration Time for which file/directory attributes are cached. (default 1s) - --daemon Run mount as a daemon (background mode). - --daemon-timeout duration Time limit for rclone to respond to kernel (not supported by all OSes). - --debug-fuse Debug the FUSE internals - needs -v. - --default-permissions Makes kernel enforce access control based on the file mode. - --dir-cache-time duration Time to cache directory entries for. (default 5m0s) - --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required. - --gid uint32 Override the gid field set by the filesystem. (default 502) - -h, --help help for mount - --max-read-ahead int The number of bytes that can be prefetched for sequential reads. (default 128k) - --no-checksum Don't compare checksums on up/download. - --no-modtime Don't read/write the modification time (can speed things up). - --no-seek Don't allow seeking in files. - -o, --option stringArray Option for libfuse/WinFsp. Repeat if required. - --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s) - --read-only Mount read-only. - --uid uint32 Override the uid field set by the filesystem. (default 502) - --umask int Override the permission bits set by the filesystem. - --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) - --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off") - --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. 
(default 1m0s) - --vfs-read-chunk-size int Read the source objects in chunks. (default 128M) - --vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) - --volname string Set the volume name (not supported by all OSes). - --write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used. + --allow-non-empty Allow mounting over a non-empty directory. + --allow-other Allow access to other users. + --allow-root Allow access to root user. + --attr-timeout duration Time for which file/directory attributes are cached. (default 1s) + --daemon Run mount as a daemon (background mode). + --daemon-timeout duration Time limit for rclone to respond to kernel (not supported by all OSes). + --debug-fuse Debug the FUSE internals - needs -v. + --default-permissions Makes kernel enforce access control based on the file mode. + --dir-cache-time duration Time to cache directory entries for. (default 5m0s) + --dir-perms FileMode Directory permissions (default 0777) + --file-perms FileMode File permissions (default 0666) + --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required. + --gid uint32 Override the gid field set by the filesystem. (default 502) + -h, --help help for mount + --max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads. (default 128k) + --no-checksum Don't compare checksums on up/download. + --no-modtime Don't read/write the modification time (can speed things up). + --no-seek Don't allow seeking in files. + -o, --option stringArray Option for libfuse/WinFsp. Repeat if required. + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s) + --read-only Mount read-only. 
+ --uid uint32 Override the uid field set by the filesystem. (default 502) + --umask int Override the permission bits set by the filesystem. + --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off) + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M) + --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) + --volname string Set the volume name (not supported by all OSes). + --write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used. ``` ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). 
(default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). 
(default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. (default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. 
(default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. - --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. 
(default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
- --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. 
(default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. 
- --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. 
- --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. 
- --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. - --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. 
(default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. (default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. 
Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. 
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. 
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors Delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping; use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name: + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_move.md b/docs/content/commands/rclone_move.md index cd69594d4..60d98e479 100644 --- a/docs/content/commands/rclone_move.md +++ b/docs/content/commands/rclone_move.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone move" slug: rclone_move url: /commands/rclone_move/ @@ -27,6 +27,11 @@ into `dest:path` then delete the original (if no errors on copy) in If you want to delete empty source directories after move, use the --delete-empty-src-dirs flag. +See the [--no-traverse](/docs/#no-traverse) option for controlling +whether rclone lists the destination directory or not. Supplying this +option when moving a small number of files into a large destination +can speed transfers up greatly. + **Important**: Since this can cause data loss, test first with the --dry-run flag. @@ -47,285 +52,303 @@ rclone move source:path dest:path [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. 
(default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. 
(default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
- --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
- --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
- --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
- --cache-db-purge Clear all the cached data for this remote on start.
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
- --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
- --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks. (default 4)
- --cache-writes Cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transferring (default)
- --delete-before When synchronizing, delete files on destination before transferring
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-formats string Deprecated: see export_formats
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
- --drive-keep-revision-forever Keep new head revision of each file forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-credentials string Service Account Credentials JSON blob
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me.
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-team-drive string ID of the Team Drive
- --drive-trashed-only Only show files that are in the trash.
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use file created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
- --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- --dropbox-impersonate string Impersonate this user when using a business account.
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, $USER
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-case Ignore case in filters (case insensitive)
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-format string Comma separated list of log format options (default "date,time")
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --onedrive-drive-id string The ID of the drive to use
- --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
- --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connection retries. (default 3)
- --qingstor-endpoint string Enter an endpoint URL to connect to QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-files string Path to local files to serve on the HTTP server.
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-no-auth Don't require auth for certain methods.
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-serve Enable the serving of remote objects.
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-session-token string An AWS session token
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing new objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --s3-v2-auth If true use v2 authentication.
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- --union-remotes string List of space separated remotes.
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
- --yandex-unlink Remove existing public link to file/folder with link command rather than creating.
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-disable-checksum Disable checksums for large (> upload cutoff) files
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum (if available) & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
+ --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ --dropbox-impersonate string Impersonate this user when using a business account.
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, $USER
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --hubic-no-chunk Don't chunk files during streaming upload.
+ --ignore-case Ignore case in filters (case insensitive)
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
+ --jottacloud-user string User Name
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
+ --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-files string Path to local files to serve on the HTTP server.
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-no-auth Don't require auth for certain methods.
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-serve Enable the serving of remote objects.
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
+ --s3-bucket-acl string Canned ACL used when creating buckets.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
+ --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
+ --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
+ --sftp-key-use-agent When set forces the usage of the ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
+ --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
+ --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-no-chunk Don't chunk files during streaming upload.
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-cookies Enable session cookiejar.
+ --use-mmap Use mmap allocator (see docs).
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_moveto.md b/docs/content/commands/rclone_moveto.md index dd93ebba8..222755a6c 100644 --- a/docs/content/commands/rclone_moveto.md +++ b/docs/content/commands/rclone_moveto.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone moveto" slug: rclone_moveto url: /commands/rclone_moveto/ @@ -56,285 +56,303 @@ rclone moveto source:path dest:path [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). 
(default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). 
(default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. (default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. 
(default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. - --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. 
(default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
- --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. 
(default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. 
- --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. 
- --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. 
- --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. - --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. 
(default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. (default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. 
Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. 
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export., + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. 
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip Google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors Delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping; use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access, if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, e.g. KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_ncdu.md b/docs/content/commands/rclone_ncdu.md index 307563f7a..6c953e662 100644 --- a/docs/content/commands/rclone_ncdu.md +++ b/docs/content/commands/rclone_ncdu.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone ncdu" slug: rclone_ncdu url: /commands/rclone_ncdu/ @@ -56,285 +56,303 @@ rclone ncdu remote:path [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). 
(default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). 
(default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. (default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. 
(default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. - --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. 
(default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
- --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. 
(default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. 
- --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. 
- --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. 
- --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. - --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. 
(default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. (default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. 
Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. 
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. 
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors Delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping; use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name: + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_obscure.md b/docs/content/commands/rclone_obscure.md index 35aa1b38b..ec54774c1 100644 --- a/docs/content/commands/rclone_obscure.md +++ b/docs/content/commands/rclone_obscure.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone obscure" slug: rclone_obscure url: /commands/rclone_obscure/ @@ -25,285 +25,303 @@ rclone obscure password [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). 
(default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). 
(default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. (default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. 
(default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. - --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. 
(default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
- --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. 
(default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. 
- --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. 
- --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. 
- --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. - --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. 
(default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. (default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. 
Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. 
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. 
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_purge.md b/docs/content/commands/rclone_purge.md index 28e0fa15c..e6714f4fc 100644 --- a/docs/content/commands/rclone_purge.md +++ b/docs/content/commands/rclone_purge.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone purge" slug: rclone_purge url: /commands/rclone_purge/ @@ -29,285 +29,303 @@ rclone purge remote:path [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). 
(default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). 
(default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. (default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. 
(default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. - --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. 
(default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
- --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. 
(default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. 
- --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. 
- --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. 
- --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. - --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. 
(default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. (default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. 
Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. 
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. 
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors Delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping; use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_rc.md b/docs/content/commands/rclone_rc.md index 42097665f..e823dea33 100644 --- a/docs/content/commands/rclone_rc.md +++ b/docs/content/commands/rclone_rc.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone rc" slug: rclone_rc url: /commands/rclone_rc/ @@ -50,285 +50,303 @@ rclone rc commands parameter [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). 
(default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). 
(default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. (default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. 
(default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. - --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. 
(default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
- --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. 
(default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. 
- --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. 
- --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. 
- --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. - --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. 
(default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. (default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. 
Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. 
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for Google documents export. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. 
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_rcat.md b/docs/content/commands/rclone_rcat.md index ff0af9b99..7b80d567c 100644 --- a/docs/content/commands/rclone_rcat.md +++ b/docs/content/commands/rclone_rcat.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone rcat" slug: rclone_rcat url: /commands/rclone_rcat/ @@ -47,285 +47,303 @@ rclone rcat remote:path [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). 
(default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). 
(default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. (default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. 
(default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. - --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. 
(default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
- --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. 
(default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. 
- --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. 
- --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. 
- --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. - --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. 
(default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. (default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. 
Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. 
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export., + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. 
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date., + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP bodies - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M) + --jottacloud-user string User Name: + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
+ --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-files string Path to local files to serve on the HTTP server.
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-no-auth Don't require auth for certain methods.
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-serve Enable the serving of remote objects.
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
+ --s3-bucket-acl string Canned ACL used when creating buckets.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
+ --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
+ --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
+ --sftp-key-use-agent When set forces the usage of the ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
+ --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
+ --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-no-chunk Don't chunk files during streaming upload.
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-cookies Enable session cookiejar.
+ --use-mmap Use mmap allocator (see docs).
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
+ --yandex-unlink Remove existing public link to file/folder with link command rather than creating.
 ```
 
 ### SEE ALSO
 
 * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
 
-###### Auto generated by spf13/cobra on 24-Nov-2018
+###### Auto generated by spf13/cobra on 9-Feb-2019
diff --git a/docs/content/commands/rclone_rcd.md b/docs/content/commands/rclone_rcd.md
index b7f401470..a4d740992 100644
--- a/docs/content/commands/rclone_rcd.md
+++ b/docs/content/commands/rclone_rcd.md
@@ -1,5 +1,5 @@
 ---
-date: 2018-11-24T13:43:29Z
+date: 2019-02-09T10:42:18Z
 title: "rclone rcd"
 slug: rclone_rcd
 url: /commands/rclone_rcd/
@@ -11,7 +11,7 @@ Run rclone listening to remote control commands only.
 
 ### Synopsis
 
-This runs rclone so that it only listents to remote control commands.
+This runs rclone so that it only listens to remote control commands.
 
 This is useful if you are controlling rclone via the rc API.
 
@@ -35,285 +35,303 @@ rclone rcd * [flags]
 ### Options inherited from parent commands
 
 ```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob: hot, cool or archive.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-list-chunk int Size of blob list. (default 5000)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
- --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
- --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
- --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
- --cache-db-purge Clear all the cached data for this remote on start.
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
- --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
- --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks. (default 4)
- --cache-writes Cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transferring (default)
- --delete-before When synchronizing, delete files on destination before transferring
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
- --drive-alternate-export Use alternate export URLs for google documents export.,
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-formats string Deprecated: see export_formats
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
- --drive-keep-revision-forever Keep new head revision of each file forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-credentials string Service Account Credentials JSON blob
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me.
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-team-drive string ID of the Team Drive
- --drive-trashed-only Only show files that are in the trash.
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use file created date instead of modified date.,
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
- --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- --dropbox-impersonate string Impersonate this user when using a business account.
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, $USER
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-case Ignore case in filters (case insensitive)
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-format string Comma separated list of log format options (default "date,time")
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --onedrive-drive-id string The ID of the drive to use
- --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
- --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-files string Path to local files to serve on the HTTP server.
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-no-auth Don't require auth for certain methods.
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-serve Enable the serving of remote objects.
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-session-token string An AWS session token
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing new objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --s3-v2-auth If true use v2 authentication.
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- --union-remotes string List of space separated remotes.
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
- --yandex-unlink Remove existing public link to file/folder with link command rather than creating.
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-disable-checksum Disable checksums for large (> upload cutoff) files
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum (if available) & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-pacer-burst int Number of API calls to allow without sleeping.
(default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. + --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors Delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping; use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_rmdir.md b/docs/content/commands/rclone_rmdir.md index 0c13ce03f..1bac50654 100644 --- a/docs/content/commands/rclone_rmdir.md +++ b/docs/content/commands/rclone_rmdir.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone rmdir" slug: rclone_rmdir url: /commands/rclone_rmdir/ @@ -27,285 +27,303 @@ rclone rmdir remote:path [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). 
(default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). 
(default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. (default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. 
(default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. - --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. 
(default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
- --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. 
(default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. 
-      --no-update-modtime  Don't update destination mod-time if files identical.
-  -x, --one-file-system  Don't cross filesystem boundaries (unix/macOS only).
-      --onedrive-chunk-size SizeSuffix  Chunk size to upload files with - must be multiple of 320k. (default 10M)
-      --onedrive-client-id string  Microsoft App Client Id
-      --onedrive-client-secret string  Microsoft App Client Secret
-      --onedrive-drive-id string  The ID of the drive to use
-      --onedrive-drive-type string  The type of the drive ( personal | business | documentLibrary )
-      --onedrive-expose-onenote-files  Set to make OneNote files show up in directory listings.
-      --opendrive-password string  Password.
-      --opendrive-username string  Username
-      --pcloud-client-id string  Pcloud App Client Id
-      --pcloud-client-secret string  Pcloud App Client Secret
-  -P, --progress  Show progress during transfer.
-      --qingstor-access-key-id string  QingStor Access Key ID
-      --qingstor-connection-retries int  Number of connection retries. (default 3)
-      --qingstor-endpoint string  Enter a endpoint URL to connection QingStor API.
-      --qingstor-env-auth  Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
-      --qingstor-secret-access-key string  QingStor Secret Access Key (password)
-      --qingstor-zone string  Zone to connect to.
-  -q, --quiet  Print as little stuff as possible
-      --rc  Enable the remote control server.
-      --rc-addr string  IPaddress:Port or :Port to bind server to. (default "localhost:5572")
-      --rc-cert string  SSL PEM key (concatenation of certificate and CA certificate)
-      --rc-client-ca string  Client certificate authority to verify clients with
-      --rc-files string  Path to local files to serve on the HTTP server.
-      --rc-htpasswd string  htpasswd file - if not provided no authentication is done
-      --rc-key string  SSL PEM Private key
-      --rc-max-header-bytes int  Maximum size of request header (default 4096)
-      --rc-no-auth  Don't require auth for certain methods.
-      --rc-pass string  Password for authentication.
-      --rc-realm string  realm for authentication (default "rclone")
-      --rc-serve  Enable the serving of remote objects.
-      --rc-server-read-timeout duration  Timeout for server reading data (default 1h0m0s)
-      --rc-server-write-timeout duration  Timeout for server writing data (default 1h0m0s)
-      --rc-user string  User name for authentication.
-      --retries int  Retry operations this many times if they fail (default 3)
-      --retries-sleep duration  Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
-      --s3-access-key-id string  AWS Access Key ID.
-      --s3-acl string  Canned ACL used when creating buckets and storing or copying objects.
-      --s3-chunk-size SizeSuffix  Chunk size to use for uploading. (default 5M)
-      --s3-disable-checksum  Don't store MD5 checksum with object metadata
-      --s3-endpoint string  Endpoint for S3 API.
-      --s3-env-auth  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
-      --s3-force-path-style  If true use path style access if false use virtual hosted style. (default true)
-      --s3-location-constraint string  Location constraint - must be set to match the Region.
-      --s3-provider string  Choose your S3 provider.
-      --s3-region string  Region to connect to.
-      --s3-secret-access-key string  AWS Secret Access Key (password)
-      --s3-server-side-encryption string  The server-side encryption algorithm used when storing this object in S3.
-      --s3-session-token string  An AWS session token
-      --s3-sse-kms-key-id string  If using KMS ID you must provide the ARN of Key.
-      --s3-storage-class string  The storage class to use when storing new objects in S3.
-      --s3-upload-concurrency int  Concurrency for multipart uploads. (default 2)
-      --s3-v2-auth  If true use v2 authentication.
-      --sftp-ask-password  Allow asking for SFTP password when needed.
-      --sftp-disable-hashcheck  Disable the execution of SSH commands to determine if remote file hashing is available.
-      --sftp-host string  SSH host to connect to
-      --sftp-key-file string  Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
-      --sftp-pass string  SSH password, leave blank to use ssh-agent.
-      --sftp-path-override string  Override path used by SSH connection.
-      --sftp-port string  SSH port, leave blank to use default (22)
-      --sftp-set-modtime  Set the modified time on the remote if set. (default true)
-      --sftp-use-insecure-cipher  Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
-      --sftp-user string  SSH username, leave blank for current username, ncw
-      --size-only  Skip based on size only, not mod-time or checksum
-      --skip-links  Don't warn about skipped symlinks.
-      --stats duration  Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
-      --stats-file-name-length int  Max file name length in stats. 0 for no limit (default 40)
-      --stats-log-level string  Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
-      --stats-one-line  Make the stats fit on one line.
-      --stats-unit string  Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
-      --streaming-upload-cutoff int  Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
-      --suffix string  Suffix for use with --backup-dir.
-      --swift-auth string  Authentication URL for server (OS_AUTH_URL).
-      --swift-auth-token string  Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
-      --swift-auth-version int  AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
-      --swift-chunk-size SizeSuffix  Above this size files will be chunked into a _segments container. (default 5G)
-      --swift-domain string  User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
-      --swift-endpoint-type string  Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-      --swift-env-auth  Get swift credentials from environment variables in standard OpenStack form.
-      --swift-key string  API key or password (OS_PASSWORD).
-      --swift-region string  Region name - optional (OS_REGION_NAME)
-      --swift-storage-policy string  The storage policy to use when creating a new container
-      --swift-storage-url string  Storage URL - optional (OS_STORAGE_URL)
-      --swift-tenant string  Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
-      --swift-tenant-domain string  Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
-      --swift-tenant-id string  Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
-      --swift-user string  User name to log in (OS_USERNAME).
-      --swift-user-id string  User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
-      --syslog  Use Syslog for logging
-      --syslog-facility string  Facility for syslog, eg KERN,USER,... (default "DAEMON")
-      --timeout duration  IO idle timeout (default 5m0s)
-      --tpslimit float  Limit HTTP transactions per second to this.
-      --tpslimit-burst int  Max burst of transactions for --tpslimit. (default 1)
-      --track-renames  When synchronizing, track file renames and do a server side move if possible
-      --transfers int  Number of file transfers to run in parallel. (default 4)
-      --union-remotes string  List of space separated remotes.
-  -u, --update  Skip files that are newer on the destination.
-      --use-server-modtime  Use server modified time instead of object metadata
-      --user-agent string  Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45")
-  -v, --verbose count  Print lots more stuff (repeat for more)
-      --webdav-bearer-token string  Bearer token instead of user/pass (eg a Macaroon)
-      --webdav-pass string  Password.
-      --webdav-url string  URL of http host to connect to
-      --webdav-user string  User name
-      --webdav-vendor string  Name of the Webdav site/service/software you are using
-      --yandex-client-id string  Yandex Client Id
-      --yandex-client-secret string  Yandex Client Secret
-      --yandex-unlink  Remove existing public link to file/folder with link command rather than creating.
+      --acd-auth-url string  Auth server URL.
+      --acd-client-id string  Amazon Application Client ID.
+      --acd-client-secret string  Amazon Application Client Secret.
+      --acd-templink-threshold SizeSuffix  Files >= this size will be downloaded via their tempLink. (default 9G)
+      --acd-token-url string  Token server url.
+      --acd-upload-wait-per-gb Duration  Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+      --alias-remote string  Remote or path to alias.
+      --ask-password  Allow prompt for password for encrypted configuration. (default true)
+      --auto-confirm  If enabled, do not request console confirmation.
+      --azureblob-access-tier string  Access tier of blob: hot, cool or archive.
+      --azureblob-account string  Storage Account Name (leave blank to use connection string or SAS URL)
+      --azureblob-chunk-size SizeSuffix  Upload chunk size (<= 100MB). (default 4M)
+      --azureblob-endpoint string  Endpoint for the service
+      --azureblob-key string  Storage Account Key (leave blank to use connection string or SAS URL)
+      --azureblob-list-chunk int  Size of blob list. (default 5000)
+      --azureblob-sas-url string  SAS URL for container level access only
+      --azureblob-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+      --b2-account string  Account ID or Application Key ID
+      --b2-chunk-size SizeSuffix  Upload chunk size. Must fit in memory. (default 96M)
+      --b2-disable-checksum  Disable checksums for large (> upload cutoff) files
+      --b2-endpoint string  Endpoint for the service.
+      --b2-hard-delete  Permanently delete files on remote removal, otherwise hide files.
+      --b2-key string  Application Key
+      --b2-test-mode string  A flag string for X-Bz-Test-Mode header for debugging.
+      --b2-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload. (default 200M)
+      --b2-versions  Include old versions in directory listings.
+      --backup-dir string  Make backups into hierarchy based in DIR.
+      --bind string  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+      --box-client-id string  Box App Client Id.
+      --box-client-secret string  Box App Client Secret
+      --box-commit-retries int  Max number of times to try committing a multipart file. (default 100)
+      --box-upload-cutoff SizeSuffix  Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+      --buffer-size SizeSuffix  In memory buffer size when reading files for each --transfer. (default 16M)
+      --bwlimit BwTimetable  Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+      --cache-chunk-clean-interval Duration  How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+      --cache-chunk-no-memory  Disable the in-memory cache for storing chunks during streaming.
+      --cache-chunk-path string  Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
+      --cache-chunk-size SizeSuffix  The size of a chunk (partial file data). (default 5M)
+      --cache-chunk-total-size SizeSuffix  The total size that the chunks can take up on the local disk. (default 10G)
+      --cache-db-path string  Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
+      --cache-db-purge  Clear all the cached data for this remote on start.
+      --cache-db-wait-time Duration  How long to wait for the DB to be available - 0 is unlimited (default 1s)
+      --cache-dir string  Directory rclone will use for caching. (default "$HOME/.cache/rclone")
+      --cache-info-age Duration  How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+      --cache-plex-insecure string  Skip all certificate verifications when connecting to the Plex server
+      --cache-plex-password string  The password of the Plex user
+      --cache-plex-url string  The URL of the Plex server
+      --cache-plex-username string  The username of the Plex user
+      --cache-read-retries int  How many times to retry a read from a cache storage. (default 10)
+      --cache-remote string  Remote to cache.
+      --cache-rps int  Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+      --cache-tmp-upload-path string  Directory to keep temporary files until they are uploaded.
+      --cache-tmp-wait-time Duration  How long should files be stored in local cache before being uploaded (default 15s)
+      --cache-workers int  How many workers should run in parallel to download chunks. (default 4)
+      --cache-writes  Cache file data on writes through the FS
+      --checkers int  Number of checkers to run in parallel. (default 8)
+  -c, --checksum  Skip based on checksum (if available) & size, not mod-time & size
+      --config string  Config file. (default "/home/ncw/.rclone.conf")
+      --contimeout duration  Connect timeout (default 1m0s)
+  -L, --copy-links  Follow symlinks and copy the pointed to item.
+      --cpuprofile string  Write cpu profile to file
+      --crypt-directory-name-encryption  Option to either encrypt directory names or leave them intact. (default true)
+      --crypt-filename-encryption string  How to encrypt the filenames. (default "standard")
+      --crypt-password string  Password or pass phrase for encryption.
+      --crypt-password2 string  Password or pass phrase for salt. Optional but recommended.
+      --crypt-remote string  Remote to encrypt/decrypt.
+      --crypt-show-mapping  For all files listed show how the names encrypt.
+      --delete-after  When synchronizing, delete files on destination after transferring (default)
+      --delete-before  When synchronizing, delete files on destination before transferring
+      --delete-during  When synchronizing, delete files during transfer
+      --delete-excluded  Delete files on dest excluded from sync
+      --disable string  Disable a comma separated list of features. Use help to see a list.
+      --drive-acknowledge-abuse  Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+      --drive-allow-import-name-change  Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+      --drive-alternate-export  Use alternate export URLs for google documents export.
+      --drive-auth-owner-only  Only consider files owned by the authenticated user.
+      --drive-chunk-size SizeSuffix  Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+      --drive-client-id string  Google Application Client Id
+      --drive-client-secret string  Google Application Client Secret
+      --drive-export-formats string  Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+      --drive-formats string  Deprecated: see export_formats
+      --drive-impersonate string  Impersonate this user when using a service account.
+      --drive-import-formats string  Comma separated list of preferred formats for uploading Google docs.
+      --drive-keep-revision-forever  Keep new head revision of each file forever.
+      --drive-list-chunk int  Size of listing chunk 100-1000. 0 to disable. (default 1000)
+      --drive-pacer-burst int  Number of API calls to allow without sleeping. (default 100)
+      --drive-pacer-min-sleep Duration  Minimum time to sleep between API calls. (default 100ms)
+      --drive-root-folder-id string  ID of the root folder
+      --drive-scope string  Scope that rclone should use when requesting access from drive.
+      --drive-service-account-credentials string  Service Account Credentials JSON blob
+      --drive-service-account-file string  Service Account Credentials JSON file path
+      --drive-shared-with-me  Only show files that are shared with me.
+      --drive-skip-gdocs  Skip google documents in all listings.
+      --drive-team-drive string  ID of the Team Drive
+      --drive-trashed-only  Only show files that are in the trash.
+      --drive-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (default 8M)
+      --drive-use-created-date  Use file created date instead of modified date.
+      --drive-use-trash  Send files to the trash instead of deleting permanently. (default true)
+      --drive-v2-download-min-size SizeSuffix  If Object's are greater, use drive v2 API to download. (default off)
+      --dropbox-chunk-size SizeSuffix  Upload chunk size. (< 150M). (default 48M)
+      --dropbox-client-id string  Dropbox App Client Id
+      --dropbox-client-secret string  Dropbox App Client Secret
+      --dropbox-impersonate string  Impersonate this user when using a business account.
+  -n, --dry-run  Do a trial run with no permanent changes
+      --dump DumpFlags  List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+      --dump-bodies  Dump HTTP headers and bodies - may contain sensitive info
+      --dump-headers  Dump HTTP headers - may contain sensitive info
+      --exclude stringArray  Exclude files matching pattern
+      --exclude-from stringArray  Read exclude patterns from file
+      --exclude-if-present string  Exclude directories if filename is present
+      --fast-list  Use recursive list if available. Uses more memory but fewer transactions.
+      --files-from stringArray  Read list of source-file names from file
+  -f, --filter stringArray  Add a file-filtering rule
+      --filter-from stringArray  Read filtering patterns from a file
+      --ftp-host string  FTP host to connect to
+      --ftp-pass string  FTP password
+      --ftp-port string  FTP port, leave blank to use default (21)
+      --ftp-user string  FTP username, leave blank for current username, $USER
+      --gcs-bucket-acl string  Access Control List for new buckets.
+      --gcs-client-id string  Google Application Client Id
+      --gcs-client-secret string  Google Application Client Secret
+      --gcs-location string  Location for the newly created buckets.
+      --gcs-object-acl string  Access Control List for new objects.
+      --gcs-project-number string  Project number.
+      --gcs-service-account-file string  Service Account Credentials JSON file path
+      --gcs-storage-class string  The storage class to use when storing objects in Google Cloud Storage.
+      --http-url string  URL of http host to connect to
+      --hubic-chunk-size SizeSuffix  Above this size files will be chunked into a _segments container. (default 5G)
+      --hubic-client-id string  Hubic Client Id
+      --hubic-client-secret string  Hubic Client Secret
+      --hubic-no-chunk  Don't chunk files during streaming upload.
+      --ignore-case  Ignore case in filters (case insensitive)
+      --ignore-checksum  Skip post copy check of checksums.
+      --ignore-errors  delete even if there are I/O errors
+      --ignore-existing  Skip all files that exist on destination
+      --ignore-size  Ignore size when skipping use mod-time or checksum.
+  -I, --ignore-times  Don't skip files that match size and time - transfer all files
+      --immutable  Do not modify files. Fail if existing files have been modified.
+      --include stringArray  Include files matching pattern
+      --include-from stringArray  Read include patterns from file
+      --jottacloud-hard-delete  Delete files permanently rather than putting them into the trash.
+      --jottacloud-md5-memory-limit SizeSuffix  Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+      --jottacloud-mountpoint string  The mountpoint to use.
+      --jottacloud-unlink  Remove existing public link to file/folder with link command rather than creating.
+      --jottacloud-upload-resume-limit SizeSuffix  Files bigger than this can be resumed if the upload fails. (default 10M)
+      --jottacloud-user string  User Name
+  -l, --links  Translate symlinks to/from regular files with a '.rclonelink' extension
+      --local-no-check-updated  Don't check to see if the files change during upload
+      --local-no-unicode-normalization  Don't apply unicode normalization to paths and filenames (Deprecated)
+      --local-nounc string  Disable UNC (long path names) conversion on Windows
+      --log-file string  Log everything to this file
+      --log-format string  Comma separated list of log format options (default "date,time")
+      --log-level string  Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+      --low-level-retries int  Number of low level retries to do. (default 10)
+      --max-age Duration  Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --max-backlog int  Maximum number of objects in sync or check backlog. (default 10000)
+      --max-delete int  When synchronizing, limit the number of deletes (default -1)
+      --max-depth int  If set limits the recursion depth to this. (default -1)
+      --max-size SizeSuffix  Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+      --max-transfer SizeSuffix  Maximum size of data to transfer. (default off)
+      --mega-debug  Output more debug from Mega.
+      --mega-hard-delete  Delete files permanently rather than putting them into the trash.
+      --mega-pass string  Password.
+      --mega-user string  User name
+      --memprofile string  Write memory profile to file
+      --min-age Duration  Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --min-size SizeSuffix  Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+      --modify-window duration  Max time diff to be considered the same (default 1ns)
+      --no-check-certificate  Do not verify the server SSL certificate. Insecure.
+      --no-gzip-encoding  Don't set Accept-Encoding: gzip.
+      --no-traverse  Don't traverse destination file system on copy.
+      --no-update-modtime  Don't update destination mod-time if files identical.
+  -x, --one-file-system  Don't cross filesystem boundaries (unix/macOS only).
+      --onedrive-chunk-size SizeSuffix  Chunk size to upload files with - must be multiple of 320k. (default 10M)
+      --onedrive-client-id string  Microsoft App Client Id
+      --onedrive-client-secret string  Microsoft App Client Secret
+      --onedrive-drive-id string  The ID of the drive to use
+      --onedrive-drive-type string  The type of the drive ( personal | business | documentLibrary )
+      --onedrive-expose-onenote-files  Set to make OneNote files show up in directory listings.
+      --opendrive-password string  Password.
+      --opendrive-username string  Username
+      --pcloud-client-id string  Pcloud App Client Id
+      --pcloud-client-secret string  Pcloud App Client Secret
+  -P, --progress  Show progress during transfer.
+      --qingstor-access-key-id string  QingStor Access Key ID
+      --qingstor-chunk-size SizeSuffix  Chunk size to use for uploading. (default 4M)
+      --qingstor-connection-retries int  Number of connection retries. (default 3)
+      --qingstor-endpoint string  Enter an endpoint URL to connect to the QingStor API.
+      --qingstor-env-auth  Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
+      --qingstor-secret-access-key string  QingStor Secret Access Key (password)
+      --qingstor-upload-concurrency int  Concurrency for multipart uploads. (default 1)
+      --qingstor-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (default 200M)
+      --qingstor-zone string  Zone to connect to.
+  -q, --quiet  Print as little stuff as possible
+      --rc  Enable the remote control server.
+      --rc-addr string  IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+      --rc-cert string  SSL PEM key (concatenation of certificate and CA certificate)
+      --rc-client-ca string  Client certificate authority to verify clients with
+      --rc-files string  Path to local files to serve on the HTTP server.
+      --rc-htpasswd string  htpasswd file - if not provided no authentication is done
+      --rc-key string  SSL PEM Private key
+      --rc-max-header-bytes int  Maximum size of request header (default 4096)
+      --rc-no-auth  Don't require auth for certain methods.
+      --rc-pass string  Password for authentication.
+      --rc-realm string  realm for authentication (default "rclone")
+      --rc-serve  Enable the serving of remote objects.
+      --rc-server-read-timeout duration  Timeout for server reading data (default 1h0m0s)
+      --rc-server-write-timeout duration  Timeout for server writing data (default 1h0m0s)
+      --rc-user string  User name for authentication.
+      --retries int  Retry operations this many times if they fail (default 3)
+      --retries-sleep duration  Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+      --s3-access-key-id string  AWS Access Key ID.
+      --s3-acl string  Canned ACL used when creating buckets and storing or copying objects.
+      --s3-bucket-acl string  Canned ACL used when creating buckets.
+      --s3-chunk-size SizeSuffix  Chunk size to use for uploading. (default 5M)
+      --s3-disable-checksum  Don't store MD5 checksum with object metadata
+      --s3-endpoint string  Endpoint for S3 API.
+      --s3-env-auth  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+      --s3-force-path-style  If true use path style access if false use virtual hosted style. (default true)
+      --s3-location-constraint string  Location constraint - must be set to match the Region.
+      --s3-provider string  Choose your S3 provider.
+      --s3-region string  Region to connect to.
+      --s3-secret-access-key string  AWS Secret Access Key (password)
+      --s3-server-side-encryption string  The server-side encryption algorithm used when storing this object in S3.
+      --s3-session-token string  An AWS session token
+      --s3-sse-kms-key-id string  If using KMS ID you must provide the ARN of Key.
+      --s3-storage-class string  The storage class to use when storing new objects in S3.
+      --s3-upload-concurrency int  Concurrency for multipart uploads. (default 4)
+      --s3-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (default 200M)
+      --s3-v2-auth  If true use v2 authentication.
+      --sftp-ask-password  Allow asking for SFTP password when needed.
+      --sftp-disable-hashcheck  Disable the execution of SSH commands to determine if remote file hashing is available.
+      --sftp-host string  SSH host to connect to
+      --sftp-key-file string  Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
+      --sftp-key-file-pass string  The passphrase to decrypt the PEM-encoded private key file.
+      --sftp-key-use-agent  When set forces the usage of the ssh-agent.
+      --sftp-pass string  SSH password, leave blank to use ssh-agent.
+      --sftp-path-override string  Override path used by SSH connection.
+      --sftp-port string  SSH port, leave blank to use default (22)
+      --sftp-set-modtime  Set the modified time on the remote if set. (default true)
+      --sftp-use-insecure-cipher  Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+      --sftp-user string  SSH username, leave blank for current username, ncw
+      --size-only  Skip based on size only, not mod-time or checksum
+      --skip-links  Don't warn about skipped symlinks.
+      --stats duration  Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+      --stats-file-name-length int  Max file name length in stats. 0 for no limit (default 45)
+      --stats-log-level string  Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+      --stats-one-line  Make the stats fit on one line.
+      --stats-unit string  Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+      --streaming-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+      --suffix string  Suffix for use with --backup-dir.
+      --swift-application-credential-id string  Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
+      --swift-application-credential-name string  Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
+      --swift-application-credential-secret string  Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
+      --swift-auth string  Authentication URL for server (OS_AUTH_URL).
+      --swift-auth-token string  Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+      --swift-auth-version int  AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+      --swift-chunk-size SizeSuffix  Above this size files will be chunked into a _segments container. (default 5G)
+      --swift-domain string  User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+      --swift-endpoint-type string  Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+      --swift-env-auth  Get swift credentials from environment variables in standard OpenStack form.
+      --swift-key string  API key or password (OS_PASSWORD).
+      --swift-no-chunk  Don't chunk files during streaming upload.
+      --swift-region string  Region name - optional (OS_REGION_NAME)
+      --swift-storage-policy string  The storage policy to use when creating a new container
+      --swift-storage-url string  Storage URL - optional (OS_STORAGE_URL)
+      --swift-tenant string  Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+      --swift-tenant-domain string  Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+      --swift-tenant-id string  Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+      --swift-user string  User name to log in (OS_USERNAME).
+      --swift-user-id string  User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+      --syslog  Use Syslog for logging
+      --syslog-facility string  Facility for syslog, eg KERN,USER,... (default "DAEMON")
+      --timeout duration  IO idle timeout (default 5m0s)
+      --tpslimit float  Limit HTTP transactions per second to this.
+      --tpslimit-burst int  Max burst of transactions for --tpslimit. (default 1)
+      --track-renames  When synchronizing, track file renames and do a server side move if possible
+      --transfers int  Number of file transfers to run in parallel. (default 4)
+      --union-remotes string  List of space separated remotes.
+  -u, --update  Skip files that are newer on the destination.
+      --use-cookies  Enable session cookiejar.
+      --use-mmap  Use mmap allocator (see docs).
+      --use-server-modtime  Use server modified time instead of object metadata
+      --user-agent string  Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
+  -v, --verbose count  Print lots more stuff (repeat for more)
+      --webdav-bearer-token string  Bearer token instead of user/pass (eg a Macaroon)
+      --webdav-pass string  Password.
+ --webdav-url string   URL of http host to connect to
+ --webdav-user string   User name
+ --webdav-vendor string   Name of the Webdav site/service/software you are using
+ --yandex-client-id string   Yandex Client Id
+ --yandex-client-secret string   Yandex Client Secret
+ --yandex-unlink   Remove existing public link to file/folder with link command rather than creating.
```

### SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

-###### Auto generated by spf13/cobra on 24-Nov-2018
+###### Auto generated by spf13/cobra on 9-Feb-2019
diff --git a/docs/content/commands/rclone_rmdirs.md b/docs/content/commands/rclone_rmdirs.md
index 88ea09fd9..2c05d3816 100644
--- a/docs/content/commands/rclone_rmdirs.md
+++ b/docs/content/commands/rclone_rmdirs.md
@@ -1,5 +1,5 @@
 ---
-date: 2018-11-24T13:43:29Z
+date: 2019-02-09T10:42:18Z
 title: "rclone rmdirs"
 slug: rclone_rmdirs
 url: /commands/rclone_rmdirs/
@@ -35,285 +35,303 @@ rclone rmdirs remote:path [flags]
 ### Options inherited from parent commands

 ```
- --acd-auth-url string   Auth server URL.
- --acd-client-id string   Amazon Application Client ID.
- --acd-client-secret string   Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix   Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string   Token server url.
- --acd-upload-wait-per-gb Duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string   Remote or path to alias.
- --ask-password   Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm   If enabled, do not request console confirmation.
- --azureblob-access-tier string   Access tier of blob: hot, cool or archive.
- --azureblob-account string   Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix   Upload chunk size (<= 100MB). (default 4M)
- --azureblob-endpoint string   Endpoint for the service
- --azureblob-key string   Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-list-chunk int   Size of blob list. (default 5000)
- --azureblob-sas-url string   SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (<= 256MB). (default 256M)
- --b2-account string   Account ID or Application Key ID
- --b2-chunk-size SizeSuffix   Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string   Endpoint for the service.
- --b2-hard-delete   Permanently delete files on remote removal, otherwise hide files.
- --b2-key string   Application Key
- --b2-test-mode string   A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload. (default 200M)
- --b2-versions   Include old versions in directory listings.
- --backup-dir string   Make backups into hierarchy based in DIR.
- --bind string   Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string   Box App Client Id.
- --box-client-secret string   Box App Client Secret
- --box-commit-retries int   Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix   Cutoff for switching to multipart upload (>= 50MB). (default 50M)
- --buffer-size int   In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable   Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
- --cache-chunk-no-memory   Disable the in-memory cache for storing chunks during streaming.
- --cache-chunk-path string   Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix   The size of a chunk (partial file data). (default 5M)
- --cache-chunk-total-size SizeSuffix   The total size that the chunks can take up on the local disk. (default 10G)
- --cache-db-path string   Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
- --cache-db-purge   Clear all the cached data for this remote on start.
- --cache-db-wait-time Duration   How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string   Directory rclone will use for caching. (default "$HOME/.cache/rclone")
- --cache-info-age Duration   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
- --cache-plex-insecure string   Skip all certificate verifications when connecting to the Plex server
- --cache-plex-password string   The password of the Plex user
- --cache-plex-url string   The URL of the Plex server
- --cache-plex-username string   The username of the Plex user
- --cache-read-retries int   How many times to retry a read from a cache storage. (default 10)
- --cache-remote string   Remote to cache.
- --cache-rps int   Limits the number of requests per second to the source FS (-1 to disable) (default -1)
- --cache-tmp-upload-path string   Directory to keep temporary files until they are uploaded.
- --cache-tmp-wait-time Duration   How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int   How many workers should run in parallel to download chunks. (default 4)
- --cache-writes   Cache file data on writes through the FS
- --checkers int   Number of checkers to run in parallel. (default 8)
- -c, --checksum   Skip based on checksum & size, not mod-time & size
- --config string   Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration   Connect timeout (default 1m0s)
- -L, --copy-links   Follow symlinks and copy the pointed to item.
- --cpuprofile string   Write cpu profile to file
- --crypt-directory-name-encryption   Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string   How to encrypt the filenames. (default "standard")
- --crypt-password string   Password or pass phrase for encryption.
- --crypt-password2 string   Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string   Remote to encrypt/decrypt.
- --crypt-show-mapping   For all files listed show how the names encrypt.
- --delete-after   When synchronizing, delete files on destination after transferring (default)
- --delete-before   When synchronizing, delete files on destination before transferring
- --delete-during   When synchronizing, delete files during transfer
- --delete-excluded   Delete files on dest excluded from sync
- --disable string   Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-allow-import-name-change   Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
- --drive-alternate-export   Use alternate export URLs for google documents export.,
- --drive-auth-owner-only   Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix   Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string   Google Application Client Id
- --drive-client-secret string   Google Application Client Secret
- --drive-export-formats string   Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-formats string   Deprecated: see export_formats
- --drive-impersonate string   Impersonate this user when using a service account.
- --drive-import-formats string   Comma separated list of preferred formats for uploading Google docs.
- --drive-keep-revision-forever   Keep new head revision of each file forever.
- --drive-list-chunk int   Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string   ID of the root folder
- --drive-scope string   Scope that rclone should use when requesting access from drive.
- --drive-service-account-credentials string   Service Account Credentials JSON blob
- --drive-service-account-file string   Service Account Credentials JSON file path
- --drive-shared-with-me   Only show files that are shared with me.
- --drive-skip-gdocs   Skip google documents in all listings.
- --drive-team-drive string   ID of the Team Drive
- --drive-trashed-only   Only show files that are in the trash.
- --drive-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date   Use file created date instead of modified date.,
- --drive-use-trash   Send files to the trash instead of deleting permanently. (default true)
- --drive-v2-download-min-size SizeSuffix   If Object's are greater, use drive v2 API to download. (default off)
- --dropbox-chunk-size SizeSuffix   Upload chunk size. (< 150M). (default 48M)
- --dropbox-client-id string   Dropbox App Client Id
- --dropbox-client-secret string   Dropbox App Client Secret
- --dropbox-impersonate string   Impersonate this user when using a business account.
- -n, --dry-run   Do a trial run with no permanent changes
- --dump string   List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies   Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers   Dump HTTP bodies - may contain sensitive info
- --exclude stringArray   Exclude files matching pattern
- --exclude-from stringArray   Read exclude patterns from file
- --exclude-if-present string   Exclude directories if filename is present
- --fast-list   Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray   Read list of source-file names from file
- -f, --filter stringArray   Add a file-filtering rule
- --filter-from stringArray   Read filtering patterns from a file
- --ftp-host string   FTP host to connect to
- --ftp-pass string   FTP password
- --ftp-port string   FTP port, leave blank to use default (21)
- --ftp-user string   FTP username, leave blank for current username, $USER
- --gcs-bucket-acl string   Access Control List for new buckets.
- --gcs-client-id string   Google Application Client Id
- --gcs-client-secret string   Google Application Client Secret
- --gcs-location string   Location for the newly created buckets.
- --gcs-object-acl string   Access Control List for new objects.
- --gcs-project-number string   Project number.
- --gcs-service-account-file string   Service Account Credentials JSON file path
- --gcs-storage-class string   The storage class to use when storing objects in Google Cloud Storage.
- --http-url string   URL of http host to connect to
- --hubic-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
- --hubic-client-id string   Hubic Client Id
- --hubic-client-secret string   Hubic Client Secret
- --ignore-case   Ignore case in filters (case insensitive)
- --ignore-checksum   Skip post copy check of checksums.
- --ignore-errors   delete even if there are I/O errors
- --ignore-existing   Skip all files that exist on destination
- --ignore-size   Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times   Don't skip files that match size and time - transfer all files
- --immutable   Do not modify files. Fail if existing files have been modified.
- --include stringArray   Include files matching pattern
- --include-from stringArray   Read include patterns from file
- --jottacloud-hard-delete   Delete files permanently rather than putting them into the trash.
- --jottacloud-md5-memory-limit SizeSuffix   Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string   The mountpoint to use.
- --jottacloud-pass string   Password.
- --jottacloud-unlink   Remove existing public link to file/folder with link command rather than creating.
- --jottacloud-user string   User Name
- --local-no-check-updated   Don't check to see if the files change during upload
- --local-no-unicode-normalization   Don't apply unicode normalization to paths and filenames (Deprecated)
- --local-nounc string   Disable UNC (long path names) conversion on Windows
- --log-file string   Log everything to this file
- --log-format string   Comma separated list of log format options (default "date,time")
- --log-level string   Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int   Number of low level retries to do. (default 10)
- --max-age duration   Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int   Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int   When synchronizing, limit the number of deletes (default -1)
- --max-depth int   If set limits the recursion depth to this. (default -1)
- --max-size int   Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int   Maximum size of data to transfer. (default off)
- --mega-debug   Output more debug from Mega.
- --mega-hard-delete   Delete files permanently rather than putting them into the trash.
- --mega-pass string   Password.
- --mega-user string   User name
- --memprofile string   Write memory profile to file
- --min-age duration   Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int   Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration   Max time diff to be considered the same (default 1ns)
- --no-check-certificate   Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding   Don't set Accept-Encoding: gzip.
- --no-traverse   Obsolete - does nothing.
- --no-update-modtime   Don't update destination mod-time if files identical.
- -x, --one-file-system   Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix   Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string   Microsoft App Client Id
- --onedrive-client-secret string   Microsoft App Client Secret
- --onedrive-drive-id string   The ID of the drive to use
- --onedrive-drive-type string   The type of the drive ( personal | business | documentLibrary )
- --onedrive-expose-onenote-files   Set to make OneNote files show up in directory listings.
- --opendrive-password string   Password.
- --opendrive-username string   Username
- --pcloud-client-id string   Pcloud App Client Id
- --pcloud-client-secret string   Pcloud App Client Secret
- -P, --progress   Show progress during transfer.
- --qingstor-access-key-id string   QingStor Access Key ID
- --qingstor-connection-retries int   Number of connection retries. (default 3)
- --qingstor-endpoint string   Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth   Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string   QingStor Secret Access Key (password)
- --qingstor-zone string   Zone to connect to.
- -q, --quiet   Print as little stuff as possible
- --rc   Enable the remote control server.
- --rc-addr string   IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string   SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string   Client certificate authority to verify clients with
- --rc-files string   Path to local files to serve on the HTTP server.
- --rc-htpasswd string   htpasswd file - if not provided no authentication is done
- --rc-key string   SSL PEM Private key
- --rc-max-header-bytes int   Maximum size of request header (default 4096)
- --rc-no-auth   Don't require auth for certain methods.
- --rc-pass string   Password for authentication.
- --rc-realm string   realm for authentication (default "rclone")
- --rc-serve   Enable the serving of remote objects.
- --rc-server-read-timeout duration   Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration   Timeout for server writing data (default 1h0m0s)
- --rc-user string   User name for authentication.
- --retries int   Retry operations this many times if they fail (default 3)
- --retries-sleep duration   Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string   AWS Access Key ID.
- --s3-acl string   Canned ACL used when creating buckets and storing or copying objects.
- --s3-chunk-size SizeSuffix   Chunk size to use for uploading. (default 5M)
- --s3-disable-checksum   Don't store MD5 checksum with object metadata
- --s3-endpoint string   Endpoint for S3 API.
- --s3-env-auth   Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style   If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string   Location constraint - must be set to match the Region.
- --s3-provider string   Choose your S3 provider.
- --s3-region string   Region to connect to.
- --s3-secret-access-key string   AWS Secret Access Key (password)
- --s3-server-side-encryption string   The server-side encryption algorithm used when storing this object in S3.
- --s3-session-token string   An AWS session token
- --s3-sse-kms-key-id string   If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string   The storage class to use when storing new objects in S3.
- --s3-upload-concurrency int   Concurrency for multipart uploads. (default 2)
- --s3-v2-auth   If true use v2 authentication.
- --sftp-ask-password   Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck   Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string   SSH host to connect to
- --sftp-key-file string   Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string   SSH password, leave blank to use ssh-agent.
- --sftp-path-override string   Override path used by SSH connection.
- --sftp-port string   SSH port, leave blank to use default (22)
- --sftp-set-modtime   Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher   Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string   SSH username, leave blank for current username, ncw
- --size-only   Skip based on size only, not mod-time or checksum
- --skip-links   Don't warn about skipped symlinks.
- --stats duration   Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int   Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string   Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line   Make the stats fit on one line.
- --stats-unit string   Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int   Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string   Suffix for use with --backup-dir.
- --swift-auth string   Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string   Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int   AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string   User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string   Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth   Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string   API key or password (OS_PASSWORD).
- --swift-region string   Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string   The storage policy to use when creating a new container
- --swift-storage-url string   Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string   Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string   Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string   Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string   User name to log in (OS_USERNAME).
- --swift-user-id string   User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog   Use Syslog for logging
- --syslog-facility string   Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration   IO idle timeout (default 5m0s)
- --tpslimit float   Limit HTTP transactions per second to this.
- --tpslimit-burst int   Max burst of transactions for --tpslimit. (default 1)
- --track-renames   When synchronizing, track file renames and do a server side move if possible
- --transfers int   Number of file transfers to run in parallel. (default 4)
- --union-remotes string   List of space separated remotes.
- -u, --update   Skip files that are newer on the destination.
- --use-server-modtime   Use server modified time instead of object metadata
- --user-agent string   Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45")
- -v, --verbose count   Print lots more stuff (repeat for more)
- --webdav-bearer-token string   Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string   Password.
- --webdav-url string   URL of http host to connect to
- --webdav-user string   User name
- --webdav-vendor string   Name of the Webdav site/service/software you are using
- --yandex-client-id string   Yandex Client Id
- --yandex-client-secret string   Yandex Client Secret
- --yandex-unlink   Remove existing public link to file/folder with link command rather than creating.
+ --acd-auth-url string   Auth server URL.
+ --acd-client-id string   Amazon Application Client ID.
+ --acd-client-secret string   Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix   Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string   Token server url.
+ --acd-upload-wait-per-gb Duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string   Remote or path to alias.
+ --ask-password   Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm   If enabled, do not request console confirmation.
+ --azureblob-access-tier string   Access tier of blob: hot, cool or archive.
+ --azureblob-account string   Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix   Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string   Endpoint for the service
+ --azureblob-key string   Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int   Size of blob list. (default 5000)
+ --azureblob-sas-url string   SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string   Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix   Upload chunk size. Must fit in memory. (default 96M)
+ --b2-disable-checksum   Disable checksums for large (> upload cutoff) files
+ --b2-endpoint string   Endpoint for the service.
+ --b2-hard-delete   Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string   Application Key
+ --b2-test-mode string   A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions   Include old versions in directory listings.
+ --backup-dir string   Make backups into hierarchy based in DIR.
+ --bind string   Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string   Box App Client Id.
+ --box-client-secret string   Box App Client Secret
+ --box-commit-retries int   Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix   Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size SizeSuffix   In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable   Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory   Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string   Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix   The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix   The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string   Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
+ --cache-db-purge   Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration   How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string   Directory rclone will use for caching. (default "$HOME/.cache/rclone")
+ --cache-info-age Duration   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string   Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string   The password of the Plex user
+ --cache-plex-url string   The URL of the Plex server
+ --cache-plex-username string   The username of the Plex user
+ --cache-read-retries int   How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string   Remote to cache.
+ --cache-rps int   Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string   Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration   How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int   How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes   Cache file data on writes through the FS
+ --checkers int   Number of checkers to run in parallel. (default 8)
+ -c, --checksum   Skip based on checksum (if available) & size, not mod-time & size
+ --config string   Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration   Connect timeout (default 1m0s)
+ -L, --copy-links   Follow symlinks and copy the pointed to item.
+ --cpuprofile string   Write cpu profile to file
+ --crypt-directory-name-encryption   Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string   How to encrypt the filenames. (default "standard")
+ --crypt-password string   Password or pass phrase for encryption.
+ --crypt-password2 string   Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string   Remote to encrypt/decrypt.
+ --crypt-show-mapping   For all files listed show how the names encrypt.
+ --delete-after   When synchronizing, delete files on destination after transferring (default)
+ --delete-before   When synchronizing, delete files on destination before transferring
+ --delete-during   When synchronizing, delete files during transfer
+ --delete-excluded   Delete files on dest excluded from sync
+ --disable string   Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change   Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export   Use alternate export URLs for google documents export.,
+ --drive-auth-owner-only   Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix   Upload chunk size. Must a power of 2 >= 256k. (default 8M)
+ --drive-client-id string   Google Application Client Id
+ --drive-client-secret string   Google Application Client Secret
+ --drive-export-formats string   Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string   Deprecated: see export_formats
+ --drive-impersonate string   Impersonate this user when using a service account.
+ --drive-import-formats string   Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever   Keep new head revision of each file forever.
+ --drive-list-chunk int   Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-pacer-burst int   Number of API calls to allow without sleeping. (default 100)
+ --drive-pacer-min-sleep Duration   Minimum time to sleep between API calls. (default 100ms)
+ --drive-root-folder-id string   ID of the root folder
+ --drive-scope string   Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string   Service Account Credentials JSON blob
+ --drive-service-account-file string   Service Account Credentials JSON file path
+ --drive-shared-with-me   Only show files that are shared with me.
+ --drive-skip-gdocs   Skip google documents in all listings.
+ --drive-team-drive string   ID of the Team Drive
+ --drive-trashed-only   Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date   Use file created date instead of modified date.,
+ --drive-use-trash   Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix   If Object's are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix   Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string   Dropbox App Client Id
+ --dropbox-client-secret string   Dropbox App Client Secret
+ --dropbox-impersonate string   Impersonate this user when using a business account.
+ -n, --dry-run   Do a trial run with no permanent changes
+ --dump DumpFlags   List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies   Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers   Dump HTTP bodies - may contain sensitive info
+ --exclude stringArray   Exclude files matching pattern
+ --exclude-from stringArray   Read exclude patterns from file
+ --exclude-if-present string   Exclude directories if filename is present
+ --fast-list   Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray   Read list of source-file names from file
+ -f, --filter stringArray   Add a file-filtering rule
+ --filter-from stringArray   Read filtering patterns from a file
+ --ftp-host string   FTP host to connect to
+ --ftp-pass string   FTP password
+ --ftp-port string   FTP port, leave blank to use default (21)
+ --ftp-user string   FTP username, leave blank for current username, $USER
+ --gcs-bucket-acl string   Access Control List for new buckets.
+ --gcs-client-id string   Google Application Client Id
+ --gcs-client-secret string   Google Application Client Secret
+ --gcs-location string   Location for the newly created buckets.
+ --gcs-object-acl string   Access Control List for new objects.
+ --gcs-project-number string   Project number.
+ --gcs-service-account-file string   Service Account Credentials JSON file path
+ --gcs-storage-class string   The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string   URL of http host to connect to
+ --hubic-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string   Hubic Client Id
+ --hubic-client-secret string   Hubic Client Secret
+ --hubic-no-chunk   Don't chunk files during streaming upload.
+ --ignore-case   Ignore case in filters (case insensitive)
+ --ignore-checksum   Skip post copy check of checksums.
+ --ignore-errors   delete even if there are I/O errors
+ --ignore-existing   Skip all files that exist on destination
+ --ignore-size   Ignore size when skipping use mod-time or checksum.
+ -I, --ignore-times   Don't skip files that match size and time - transfer all files
+ --immutable   Do not modify files. Fail if existing files have been modified.
+ --include stringArray   Include files matching pattern
+ --include-from stringArray   Read include patterns from file
+ --jottacloud-hard-delete   Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix   Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string   The mountpoint to use.
+ --jottacloud-unlink   Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-upload-resume-limit SizeSuffix   Files bigger than this can be resumed if the upload fail's. (default 10M)
+ --jottacloud-user string   User Name:
+ -l, --links   Translate symlinks to/from regular files with a '.rclonelink' extension
+ --local-no-check-updated   Don't check to see if the files change during upload
+ --local-no-unicode-normalization   Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string   Disable UNC (long path names) conversion on Windows
+ --log-file string   Log everything to this file
+ --log-format string   Comma separated list of log format options (default "date,time")
+ --log-level string   Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int   Number of low level retries to do. (default 10)
+ --max-age Duration   Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int   Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int   When synchronizing, limit the number of deletes (default -1)
+ --max-depth int   If set limits the recursion depth to this. (default -1)
+ --max-size SizeSuffix   Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer SizeSuffix   Maximum size of data to transfer. (default off)
+ --mega-debug   Output more debug from Mega.
+ --mega-hard-delete   Delete files permanently rather than putting them into the trash.
+ --mega-pass string   Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
+ --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-files string Path to local files to serve on the HTTP server.
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-no-auth Don't require auth for certain methods.
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-serve Enable the serving of remote objects.
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
+ --s3-bucket-acl string Canned ACL used when creating buckets.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
+ --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
+ --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
+ --sftp-key-use-agent When set forces the usage of the ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
+ --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
+ --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-no-chunk Don't chunk files during streaming upload.
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-cookies Enable session cookiejar.
+ --use-mmap Use mmap allocator (see docs).
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
+ --yandex-unlink Remove existing public link to file/folder with link command rather than creating.
 ```
 
 ### SEE ALSO
 
 * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
 
-###### Auto generated by spf13/cobra on 24-Nov-2018
+###### Auto generated by spf13/cobra on 9-Feb-2019
diff --git a/docs/content/commands/rclone_serve.md b/docs/content/commands/rclone_serve.md
index 632dc6c35..7903005d7 100644
--- a/docs/content/commands/rclone_serve.md
+++ b/docs/content/commands/rclone_serve.md
@@ -1,5 +1,5 @@
 ---
-date: 2018-11-24T13:43:29Z
+date: 2019-02-09T10:42:18Z
 title: "rclone serve"
 slug: rclone_serve
 url: /commands/rclone_serve/
@@ -31,289 +31,308 @@ rclone serve [opts] [flags]
 ### Options inherited from parent commands
 
 ```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob: hot, cool or archive.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-list-chunk int Size of blob list. (default 5000)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
- --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
- --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
- --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
- --cache-db-purge Clear all the cached data for this remote on start.
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
- --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
- --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks. (default 4)
- --cache-writes Cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transferring (default)
- --delete-before When synchronizing, delete files on destination before transferring
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
- --drive-alternate-export Use alternate export URLs for google documents export.,
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-formats string Deprecated: see export_formats
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
- --drive-keep-revision-forever Keep new head revision of each file forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-credentials string Service Account Credentials JSON blob
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me.
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-team-drive string ID of the Team Drive
- --drive-trashed-only Only show files that are in the trash.
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use file created date instead of modified date.,
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
- --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- --dropbox-impersonate string Impersonate this user when using a business account.
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, $USER
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-case Ignore case in filters (case insensitive)
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-format string Comma separated list of log format options (default "date,time")
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --onedrive-drive-id string The ID of the drive to use
- --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
- --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-files string Path to local files to serve on the HTTP server.
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-no-auth Don't require auth for certain methods.
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-serve Enable the serving of remote objects.
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-session-token string An AWS session token
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing new objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --s3-v2-auth If true use v2 authentication.
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- --union-remotes string List of space separated remotes.
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
- --yandex-unlink Remove existing public link to file/folder with link command rather than creating.
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-disable-checksum Disable checksums for large (> upload cutoff) files
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export., + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. 
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M) + --jottacloud-user string User Name: + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. +* [rclone serve dlna](/commands/rclone_serve_dlna/) - Serve remote:path over DLNA * [rclone serve ftp](/commands/rclone_serve_ftp/) - Serve remote:path over FTP. * [rclone serve http](/commands/rclone_serve_http/) - Serve the remote over HTTP. * [rclone serve restic](/commands/rclone_serve_restic/) - Serve the remote for restic's REST API. * [rclone serve webdav](/commands/rclone_serve_webdav/) - Serve remote:path over webdav. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_serve_dlna.md b/docs/content/commands/rclone_serve_dlna.md new file mode 100644 index 000000000..6d3327604 --- /dev/null +++ b/docs/content/commands/rclone_serve_dlna.md @@ -0,0 +1,495 @@ +--- +date: 2019-02-09T10:42:18Z +title: "rclone serve dlna" +slug: rclone_serve_dlna +url: /commands/rclone_serve_dlna/ +--- +## rclone serve dlna + +Serve remote:path over DLNA + +### Synopsis + +rclone serve dlna is a DLNA media server for media stored in a rclone remote. Many +devices, such as the Xbox and PlayStation, can automatically discover this server in the LAN +and play audio/video from it. VLC is also supported. Service discovery uses UDP multicast +packets (SSDP) and will thus only work on LANs. + +Rclone will list all files present in the remote, without filtering based on media formats or +file extensions. Additionally, there is no media transcoding support. 
This means that some +players might show files that they are not able to play back correctly. + + +### Server options + +Use --addr to specify which IP address and port the server should +listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all +IPs. + + +### Directory Cache + +Using the `--dir-cache-time` flag, you can set how long a +directory should be considered up to date and not refreshed from the +backend. Changes made locally in the mount may appear immediately or +invalidate the cache. However, changes done on the remote will only +be picked up once the cache expires. + +Alternatively, you can send a `SIGHUP` signal to rclone for +it to flush all directory caches, regardless of how old they are. +Assuming only one rclone instance is running, you can reset the cache +like this: + + kill -SIGHUP $(pidof rclone) + +If you configure rclone with a [remote control](/rc) then you can use +rclone rc to flush the whole directory cache: + + rclone rc vfs/forget + +Or individual files or directories: + + rclone rc vfs/forget file=path/to/file dir=path/to/dir + +### File Buffering + +The `--buffer-size` flag determines the amount of memory +that will be used to buffer data in advance. + +Each open file descriptor will try to keep the specified amount of +data in memory at all times. The buffered data is bound to one file +descriptor and won't be shared between multiple open file descriptors +of the same file. + +This flag is an upper limit for the used memory per file descriptor. +The buffer will only use memory for data that is downloaded but +not yet read. If the buffer is empty, only a small amount of memory +will be used. +The maximum memory used by rclone for buffering can be up to +`--buffer-size * open files`. + +### File Caching + +These flags control the VFS file caching options. The VFS layer is +used by rclone mount to make a cloud storage system work more like a +normal file system. 
+ +You'll need to enable VFS caching if you want, for example, to read +and write simultaneously to a file. See below for more details. + +Note that the VFS cache works in addition to the cache backend and you +may find that you need one or the other or both. + + --cache-dir string Directory rclone will use for caching. + --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) + --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off") + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) + --vfs-cache-max-size int Max total size of objects in the cache. (default off) + +If run with `-vv` rclone will print the location of the file cache. The +files are stored in the user cache file area which is OS dependent but +can be controlled with `--cache-dir` or setting the appropriate +environment variable. + +The cache has 4 different modes selected by `--vfs-cache-mode`. +The higher the cache mode the more compatible rclone becomes at the +cost of using disk space. + +Note that files are written back to the remote only when they are +closed so if rclone is quit or dies with open files then these won't +get written back to the remote. However they will still be in the on +disk cache. + +If using --vfs-cache-max-size note that the cache may exceed this size +for two reasons. Firstly because it is only checked every +--vfs-cache-poll-interval. Secondly because open files cannot be +evicted from the cache. + +#### --vfs-cache-mode off + +In this mode the cache will read directly from the remote and write +directly to the remote without caching anything on disk. 
+ +This will mean some operations are not possible + + * Files can't be opened for both read AND write + * Files opened for write can't be seeked + * Existing files opened for write must have O_TRUNC set + * Files open for read with O_TRUNC will be opened write only + * Files open for write only will behave as if O_TRUNC was supplied + * Open modes O_APPEND, O_TRUNC are ignored + * If an upload fails it can't be retried + +#### --vfs-cache-mode minimal + +This is very similar to "off" except that files opened for read AND +write will be buffered to disk. This means that files opened for +write will be a lot more compatible, but uses minimal disk space. + +These operations are not possible + + * Files opened for write only can't be seeked + * Existing files opened for write must have O_TRUNC set + * Files opened for write only will ignore O_APPEND, O_TRUNC + * If an upload fails it can't be retried + +#### --vfs-cache-mode writes + +In this mode files opened for read only are still read directly from +the remote, write only and read/write files are buffered to disk +first. + +This mode should support all normal file system operations. + +If an upload fails it will be retried up to --low-level-retries times. + +#### --vfs-cache-mode full + +In this mode all reads and writes are buffered to and from disk. When +a file is opened for read it will be downloaded in its entirety first. + +This may be appropriate for your needs, or you may prefer to look at +the cache backend which does a much more sophisticated job of caching, +including caching directory hierarchies and chunks of files. + +In this mode, unlike the others, when a file is written to the disk, +it will be kept on the disk after it is written to the remote. It +will be purged on a schedule according to `--vfs-cache-max-age`. + +This mode should support all normal file system operations. + +If an upload or download fails it will be retried up to +--low-level-retries times. 
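+ +As a sketch of how the cache modes above combine with the server and +global flags (the remote name `media:` and its path are hypothetical; +all flags are ones documented on this page or in the global flag list): + +``` +# Serve a placeholder remote over DLNA with a writable VFS cache. +rclone serve dlna media:films \ +    --addr :7879 \ +    --vfs-cache-mode writes \ +    --vfs-cache-max-age 1h \ +    --dir-cache-time 5m + +# If the same instance was started with --rc, drop its directory +# cache without restarting the server: +rclone rc vfs/forget +``` + +Without the remote control server, `kill -SIGHUP $(pidof rclone)` +flushes the directory cache in the same way. 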
+ + +``` +rclone serve dlna remote:path [flags] +``` + +### Options + +``` + --addr string ip:port or :port to bind the DLNA http server to. (default ":7879") + --dir-cache-time duration Time to cache directory entries for. (default 5m0s) + --dir-perms FileMode Directory permissions (default 0777) + --file-perms FileMode File permissions (default 0666) + --gid uint32 Override the gid field set by the filesystem. (default 502) + -h, --help help for dlna + --no-checksum Don't compare checksums on up/download. + --no-modtime Don't read/write the modification time (can speed things up). + --no-seek Don't allow seeking in files. + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s) + --read-only Mount read-only. + --uid uint32 Override the uid field set by the filesystem. (default 502) + --umask int Override the permission bits set by the filesystem. (default 2) + --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off) + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M) + --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) +``` + +### Options inherited from parent commands + +``` + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. 
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. 
(default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. + --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. 
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. + --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export. + --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. + --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M).
(default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. + --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container.
(default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors Delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping; use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. + --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do.
(default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. + --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. 
+ --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
+ --sftp-key-use-agent When set, forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. + --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. 
(default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. + --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. +``` + +### SEE ALSO + +* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol. + +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_serve_ftp.md b/docs/content/commands/rclone_serve_ftp.md index e8f762f0d..9413eb3f2 100644 --- a/docs/content/commands/rclone_serve_ftp.md +++ b/docs/content/commands/rclone_serve_ftp.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone serve ftp" slug: rclone_serve_ftp url: /commands/rclone_serve_ftp/ @@ -88,6 +88,7 @@ may find that you need one or the other or both. --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off") --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) + --vfs-cache-max-size int Max total size of objects in the cache. (default off) If run with `-vv` rclone will print the location of the file cache. 
The files are stored in the user cache file area which is OS dependent but @@ -103,6 +104,11 @@ closed so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache. +If using --vfs-cache-max-size note that the cache may exceed this size +for two reasons. Firstly because it is only checked every +--vfs-cache-poll-interval. Secondly because open files cannot be +evicted from the cache. + #### --vfs-cache-mode off In this mode the cache will read directly from the remote and write @@ -167,309 +173,330 @@ rclone serve ftp remote:path [flags] ### Options ``` - --addr string IPaddress:Port or :Port to bind server to. (default "localhost:2121") - --dir-cache-time duration Time to cache directory entries for. (default 5m0s) - --gid uint32 Override the gid field set by the filesystem. (default 502) - -h, --help help for ftp - --no-checksum Don't compare checksums on up/download. - --no-modtime Don't read/write the modification time (can speed things up). - --no-seek Don't allow seeking in files. - --pass string Password for authentication. (empty value allow every password) - --passive-port string Passive port range to use. (default "30000-32000") - --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s) - --read-only Mount read-only. - --uid uint32 Override the uid field set by the filesystem. (default 502) - --umask int Override the permission bits set by the filesystem. (default 2) - --user string User name for authentication. (default "anonymous") - --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) - --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off") - --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) - --vfs-read-chunk-size int Read the source objects in chunks. 
(default 128M) - --vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) + --addr string IPaddress:Port or :Port to bind server to. (default "localhost:2121") + --dir-cache-time duration Time to cache directory entries for. (default 5m0s) + --dir-perms FileMode Directory permissions (default 0777) + --file-perms FileMode File permissions (default 0666) + --gid uint32 Override the gid field set by the filesystem. (default 502) + -h, --help help for ftp + --no-checksum Don't compare checksums on up/download. + --no-modtime Don't read/write the modification time (can speed things up). + --no-seek Don't allow seeking in files. + --pass string Password for authentication. (empty value allows every password) + --passive-port string Passive port range to use. (default "30000-32000") + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s) + --read-only Mount read-only. + --uid uint32 Override the uid field set by the filesystem. (default 502) + --umask int Override the permission bits set by the filesystem. (default 2) + --user string User name for authentication. (default "anonymous") + --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off) + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M) + --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited.
(default off) ``` ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. 
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. 
(default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. (default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). 
This will confuse sync and reupload every time.
-   --drive-alternate-export                    Use alternate export URLs for google documents export.
-   --drive-auth-owner-only                     Only consider files owned by the authenticated user.
-   --drive-chunk-size SizeSuffix               Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
-   --drive-client-id string                    Google Application Client Id
-   --drive-client-secret string                Google Application Client Secret
-   --drive-export-formats string               Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
-   --drive-formats string                      Deprecated: see export_formats
-   --drive-impersonate string                  Impersonate this user when using a service account.
-   --drive-import-formats string               Comma separated list of preferred formats for uploading Google docs.
-   --drive-keep-revision-forever               Keep new head revision of each file forever.
-   --drive-list-chunk int                      Size of listing chunk 100-1000. 0 to disable. (default 1000)
-   --drive-root-folder-id string               ID of the root folder
-   --drive-scope string                        Scope that rclone should use when requesting access from drive.
-   --drive-service-account-credentials string  Service Account Credentials JSON blob
-   --drive-service-account-file string         Service Account Credentials JSON file path
-   --drive-shared-with-me                      Only show files that are shared with me.
-   --drive-skip-gdocs                          Skip google documents in all listings.
-   --drive-team-drive string                   ID of the Team Drive
-   --drive-trashed-only                        Only show files that are in the trash.
-   --drive-upload-cutoff SizeSuffix            Cutoff for switching to chunked upload (default 8M)
-   --drive-use-created-date                    Use file created date instead of modified date.
-   --drive-use-trash                           Send files to the trash instead of deleting permanently. (default true)
-   --drive-v2-download-min-size SizeSuffix     If Objects are greater, use drive v2 API to download. (default off)
-   --dropbox-chunk-size SizeSuffix             Upload chunk size. (< 150M). (default 48M)
-   --dropbox-client-id string                  Dropbox App Client Id
-   --dropbox-client-secret string              Dropbox App Client Secret
-   --dropbox-impersonate string                Impersonate this user when using a business account.
-   -n, --dry-run                               Do a trial run with no permanent changes
-   --dump string                               List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
-   --dump-bodies                               Dump HTTP headers and bodies - may contain sensitive info
-   --dump-headers                              Dump HTTP headers - may contain sensitive info
-   --exclude stringArray                       Exclude files matching pattern
-   --exclude-from stringArray                  Read exclude patterns from file
-   --exclude-if-present string                 Exclude directories if filename is present
-   --fast-list                                 Use recursive list if available. Uses more memory but fewer transactions.
-   --files-from stringArray                    Read list of source-file names from file
-   -f, --filter stringArray                    Add a file-filtering rule
-   --filter-from stringArray                   Read filtering patterns from a file
-   --ftp-host string                           FTP host to connect to
-   --ftp-pass string                           FTP password
-   --ftp-port string                           FTP port, leave blank to use default (21)
-   --ftp-user string                           FTP username, leave blank for current username, $USER
-   --gcs-bucket-acl string                     Access Control List for new buckets.
-   --gcs-client-id string                      Google Application Client Id
-   --gcs-client-secret string                  Google Application Client Secret
-   --gcs-location string                       Location for the newly created buckets.
-   --gcs-object-acl string                     Access Control List for new objects.
-   --gcs-project-number string                 Project number.
-   --gcs-service-account-file string           Service Account Credentials JSON file path
-   --gcs-storage-class string                  The storage class to use when storing objects in Google Cloud Storage.
-   --http-url string                           URL of http host to connect to
-   --hubic-chunk-size SizeSuffix               Above this size files will be chunked into a _segments container. (default 5G)
-   --hubic-client-id string                    Hubic Client Id
-   --hubic-client-secret string                Hubic Client Secret
-   --ignore-case                               Ignore case in filters (case insensitive)
-   --ignore-checksum                           Skip post copy check of checksums.
-   --ignore-errors                             Delete even if there are I/O errors
-   --ignore-existing                           Skip all files that exist on destination
-   --ignore-size                               Ignore size when skipping; use mod-time or checksum.
-   -I, --ignore-times                          Don't skip files that match size and time - transfer all files
-   --immutable                                 Do not modify files. Fail if existing files have been modified.
-   --include stringArray                       Include files matching pattern
-   --include-from stringArray                  Read include patterns from file
-   --jottacloud-hard-delete                    Delete files permanently rather than putting them into the trash.
-   --jottacloud-md5-memory-limit SizeSuffix    Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
-   --jottacloud-mountpoint string              The mountpoint to use.
-   --jottacloud-pass string                    Password.
-   --jottacloud-unlink                         Remove existing public link to file/folder with link command rather than creating.
-   --jottacloud-user string                    User Name
-   --local-no-check-updated                    Don't check to see if the files change during upload
-   --local-no-unicode-normalization            Don't apply unicode normalization to paths and filenames (Deprecated)
-   --local-nounc string                        Disable UNC (long path names) conversion on Windows
-   --log-file string                           Log everything to this file
-   --log-format string                         Comma separated list of log format options (default "date,time")
-   --log-level string                          Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
-   --low-level-retries int                     Number of low level retries to do. (default 10)
-   --max-age duration                          Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-   --max-backlog int                           Maximum number of objects in sync or check backlog. (default 10000)
-   --max-delete int                            When synchronizing, limit the number of deletes (default -1)
-   --max-depth int                             If set limits the recursion depth to this. (default -1)
-   --max-size int                              Only transfer files smaller than this in k or suffix b|k|M|G (default off)
-   --max-transfer int                          Maximum size of data to transfer. (default off)
-   --mega-debug                                Output more debug from Mega.
-   --mega-hard-delete                          Delete files permanently rather than putting them into the trash.
-   --mega-pass string                          Password.
-   --mega-user string                          User name
-   --memprofile string                         Write memory profile to file
-   --min-age duration                          Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-   --min-size int                              Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-   --modify-window duration                    Max time diff to be considered the same (default 1ns)
-   --no-check-certificate                      Do not verify the server SSL certificate. Insecure.
-   --no-gzip-encoding                          Don't set Accept-Encoding: gzip.
-   --no-traverse                               Obsolete - does nothing.
-   --no-update-modtime                         Don't update destination mod-time if files identical.
-   -x, --one-file-system                       Don't cross filesystem boundaries (unix/macOS only).
-   --onedrive-chunk-size SizeSuffix            Chunk size to upload files with - must be multiple of 320k. (default 10M)
-   --onedrive-client-id string                 Microsoft App Client Id
-   --onedrive-client-secret string             Microsoft App Client Secret
-   --onedrive-drive-id string                  The ID of the drive to use
-   --onedrive-drive-type string                The type of the drive ( personal | business | documentLibrary )
-   --onedrive-expose-onenote-files             Set to make OneNote files show up in directory listings.
-   --opendrive-password string                 Password.
-   --opendrive-username string                 Username
-   --pcloud-client-id string                   Pcloud App Client Id
-   --pcloud-client-secret string               Pcloud App Client Secret
-   -P, --progress                              Show progress during transfer.
-   --qingstor-access-key-id string             QingStor Access Key ID
-   --qingstor-connection-retries int           Number of connection retries. (default 3)
-   --qingstor-endpoint string                  Enter an endpoint URL to connect to the QingStor API.
-   --qingstor-env-auth                         Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
-   --qingstor-secret-access-key string         QingStor Secret Access Key (password)
-   --qingstor-zone string                      Zone to connect to.
-   -q, --quiet                                 Print as little stuff as possible
-   --rc                                        Enable the remote control server.
-   --rc-addr string                            IPaddress:Port or :Port to bind server to. (default "localhost:5572")
-   --rc-cert string                            SSL PEM key (concatenation of certificate and CA certificate)
-   --rc-client-ca string                       Client certificate authority to verify clients with
-   --rc-files string                           Path to local files to serve on the HTTP server.
-   --rc-htpasswd string                        htpasswd file - if not provided no authentication is done
-   --rc-key string                             SSL PEM Private key
-   --rc-max-header-bytes int                   Maximum size of request header (default 4096)
-   --rc-no-auth                                Don't require auth for certain methods.
-   --rc-pass string                            Password for authentication.
-   --rc-realm string                           realm for authentication (default "rclone")
-   --rc-serve                                  Enable the serving of remote objects.
-   --rc-server-read-timeout duration           Timeout for server reading data (default 1h0m0s)
-   --rc-server-write-timeout duration          Timeout for server writing data (default 1h0m0s)
-   --rc-user string                            User name for authentication.
-   --retries int                               Retry operations this many times if they fail (default 3)
-   --retries-sleep duration                    Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
-   --s3-access-key-id string                   AWS Access Key ID.
-   --s3-acl string                             Canned ACL used when creating buckets and storing or copying objects.
-   --s3-chunk-size SizeSuffix                  Chunk size to use for uploading. (default 5M)
-   --s3-disable-checksum                       Don't store MD5 checksum with object metadata
-   --s3-endpoint string                        Endpoint for S3 API.
-   --s3-env-auth                               Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
-   --s3-force-path-style                       If true use path style access if false use virtual hosted style. (default true)
-   --s3-location-constraint string             Location constraint - must be set to match the Region.
-   --s3-provider string                        Choose your S3 provider.
-   --s3-region string                          Region to connect to.
-   --s3-secret-access-key string               AWS Secret Access Key (password)
-   --s3-server-side-encryption string          The server-side encryption algorithm used when storing this object in S3.
-   --s3-session-token string                   An AWS session token
-   --s3-sse-kms-key-id string                  If using KMS ID you must provide the ARN of Key.
-   --s3-storage-class string                   The storage class to use when storing new objects in S3.
-   --s3-upload-concurrency int                 Concurrency for multipart uploads. (default 2)
-   --s3-v2-auth                                If true use v2 authentication.
-   --sftp-ask-password                         Allow asking for SFTP password when needed.
-   --sftp-disable-hashcheck                    Disable the execution of SSH commands to determine if remote file hashing is available.
-   --sftp-host string                          SSH host to connect to
-   --sftp-key-file string                      Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
-   --sftp-pass string                          SSH password, leave blank to use ssh-agent.
-   --sftp-path-override string                 Override path used by SSH connection.
-   --sftp-port string                          SSH port, leave blank to use default (22)
-   --sftp-set-modtime                          Set the modified time on the remote if set. (default true)
-   --sftp-use-insecure-cipher                  Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
-   --sftp-user string                          SSH username, leave blank for current username, ncw
-   --size-only                                 Skip based on size only, not mod-time or checksum
-   --skip-links                                Don't warn about skipped symlinks.
-   --stats duration                            Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
-   --stats-file-name-length int                Max file name length in stats. 0 for no limit (default 40)
-   --stats-log-level string                    Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
-   --stats-one-line                            Make the stats fit on one line.
-   --stats-unit string                         Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
-   --streaming-upload-cutoff int               Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
-   --suffix string                             Suffix for use with --backup-dir.
-   --swift-auth string                         Authentication URL for server (OS_AUTH_URL).
-   --swift-auth-token string                   Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
-   --swift-auth-version int                    AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
-   --swift-chunk-size SizeSuffix               Above this size files will be chunked into a _segments container. (default 5G)
-   --swift-domain string                       User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
-   --swift-endpoint-type string                Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-   --swift-env-auth                            Get swift credentials from environment variables in standard OpenStack form.
-   --swift-key string                          API key or password (OS_PASSWORD).
-   --swift-region string                       Region name - optional (OS_REGION_NAME)
-   --swift-storage-policy string               The storage policy to use when creating a new container
-   --swift-storage-url string                  Storage URL - optional (OS_STORAGE_URL)
-   --swift-tenant string                       Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
-   --swift-tenant-domain string                Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
-   --swift-tenant-id string                    Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
-   --swift-user string                         User name to log in (OS_USERNAME).
-   --swift-user-id string                      User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
-   --syslog                                    Use Syslog for logging
-   --syslog-facility string                    Facility for syslog, eg KERN,USER,... (default "DAEMON")
-   --timeout duration                          IO idle timeout (default 5m0s)
-   --tpslimit float                            Limit HTTP transactions per second to this.
-   --tpslimit-burst int                        Max burst of transactions for --tpslimit. (default 1)
-   --track-renames                             When synchronizing, track file renames and do a server side move if possible
-   --transfers int                             Number of file transfers to run in parallel. (default 4)
-   --union-remotes string                      List of space separated remotes.
-   -u, --update                                Skip files that are newer on the destination.
-   --use-server-modtime                        Use server modified time instead of object metadata
-   --user-agent string                         Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45")
-   -v, --verbose count                         Print lots more stuff (repeat for more)
-   --webdav-bearer-token string                Bearer token instead of user/pass (eg a Macaroon)
-   --webdav-pass string                        Password.
-   --webdav-url string                         URL of http host to connect to
-   --webdav-user string                        User name
-   --webdav-vendor string                      Name of the Webdav site/service/software you are using
-   --yandex-client-id string                   Yandex Client Id
-   --yandex-client-secret string               Yandex Client Secret
-   --yandex-unlink                             Remove existing public link to file/folder with link command rather than creating.
+   --acd-auth-url string                       Auth server URL.
+   --acd-client-id string                      Amazon Application Client ID.
+   --acd-client-secret string                  Amazon Application Client Secret.
+   --acd-templink-threshold SizeSuffix         Files >= this size will be downloaded via their tempLink. (default 9G)
+   --acd-token-url string                      Token server url.
+   --acd-upload-wait-per-gb Duration           Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+   --alias-remote string                       Remote or path to alias.
+   --ask-password                              Allow prompt for password for encrypted configuration. (default true)
+   --auto-confirm                              If enabled, do not request console confirmation.
+   --azureblob-access-tier string              Access tier of blob: hot, cool or archive.
+   --azureblob-account string                  Storage Account Name (leave blank to use connection string or SAS URL)
+   --azureblob-chunk-size SizeSuffix           Upload chunk size (<= 100MB). (default 4M)
+   --azureblob-endpoint string                 Endpoint for the service
+   --azureblob-key string                      Storage Account Key (leave blank to use connection string or SAS URL)
+   --azureblob-list-chunk int                  Size of blob list. (default 5000)
+   --azureblob-sas-url string                  SAS URL for container level access only
+   --azureblob-upload-cutoff SizeSuffix        Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+   --b2-account string                         Account ID or Application Key ID
+   --b2-chunk-size SizeSuffix                  Upload chunk size. Must fit in memory. (default 96M)
+   --b2-disable-checksum                       Disable checksums for large (> upload cutoff) files
+   --b2-endpoint string                        Endpoint for the service.
+   --b2-hard-delete                            Permanently delete files on remote removal, otherwise hide files.
+   --b2-key string                             Application Key
+   --b2-test-mode string                       A flag string for X-Bz-Test-Mode header for debugging.
+   --b2-upload-cutoff SizeSuffix               Cutoff for switching to chunked upload. (default 200M)
+   --b2-versions                               Include old versions in directory listings.
+   --backup-dir string                         Make backups into hierarchy based in DIR.
+   --bind string                               Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+   --box-client-id string                      Box App Client Id.
+   --box-client-secret string                  Box App Client Secret
+   --box-commit-retries int                    Max number of times to try committing a multipart file. (default 100)
+   --box-upload-cutoff SizeSuffix              Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+   --buffer-size SizeSuffix                    In memory buffer size when reading files for each --transfer. (default 16M)
+   --bwlimit BwTimetable                       Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+   --cache-chunk-clean-interval Duration       How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+   --cache-chunk-no-memory                     Disable the in-memory cache for storing chunks during streaming.
+   --cache-chunk-path string                   Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
+   --cache-chunk-size SizeSuffix               The size of a chunk (partial file data). (default 5M)
+   --cache-chunk-total-size SizeSuffix         The total size that the chunks can take up on the local disk. (default 10G)
+   --cache-db-path string                      Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
+   --cache-db-purge                            Clear all the cached data for this remote on start.
+   --cache-db-wait-time Duration               How long to wait for the DB to be available - 0 is unlimited (default 1s)
+   --cache-dir string                          Directory rclone will use for caching. (default "$HOME/.cache/rclone")
+   --cache-info-age Duration                   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+   --cache-plex-insecure string                Skip all certificate verifications when connecting to the Plex server
+   --cache-plex-password string                The password of the Plex user
+   --cache-plex-url string                     The URL of the Plex server
+   --cache-plex-username string                The username of the Plex user
+   --cache-read-retries int                    How many times to retry a read from a cache storage. (default 10)
+   --cache-remote string                       Remote to cache.
+   --cache-rps int                             Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+   --cache-tmp-upload-path string              Directory to keep temporary files until they are uploaded.
+   --cache-tmp-wait-time Duration              How long should files be stored in local cache before being uploaded (default 15s)
+   --cache-workers int                         How many workers should run in parallel to download chunks. (default 4)
+   --cache-writes                              Cache file data on writes through the FS
+   --checkers int                              Number of checkers to run in parallel. (default 8)
+   -c, --checksum                              Skip based on checksum (if available) & size, not mod-time & size
+   --config string                             Config file. (default "/home/ncw/.rclone.conf")
+   --contimeout duration                       Connect timeout (default 1m0s)
+   -L, --copy-links                            Follow symlinks and copy the pointed to item.
+   --cpuprofile string                         Write cpu profile to file
+   --crypt-directory-name-encryption           Option to either encrypt directory names or leave them intact. (default true)
+   --crypt-filename-encryption string          How to encrypt the filenames. (default "standard")
+   --crypt-password string                     Password or pass phrase for encryption.
+   --crypt-password2 string                    Password or pass phrase for salt. Optional but recommended.
+   --crypt-remote string                       Remote to encrypt/decrypt.
+   --crypt-show-mapping                        For all files listed show how the names encrypt.
+   --delete-after                              When synchronizing, delete files on destination after transferring (default)
+   --delete-before                             When synchronizing, delete files on destination before transferring
+   --delete-during                             When synchronizing, delete files during transfer
+   --delete-excluded                           Delete files on dest excluded from sync
+   --disable string                            Disable a comma separated list of features. Use help to see a list.
+   --drive-acknowledge-abuse                   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+   --drive-allow-import-name-change            Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+   --drive-alternate-export                    Use alternate export URLs for google documents export.
+   --drive-auth-owner-only                     Only consider files owned by the authenticated user.
+   --drive-chunk-size SizeSuffix               Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+   --drive-client-id string                    Google Application Client Id
+   --drive-client-secret string                Google Application Client Secret
+   --drive-export-formats string               Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+   --drive-formats string                      Deprecated: see export_formats
+   --drive-impersonate string                  Impersonate this user when using a service account.
+   --drive-import-formats string               Comma separated list of preferred formats for uploading Google docs.
+   --drive-keep-revision-forever               Keep new head revision of each file forever.
+   --drive-list-chunk int                      Size of listing chunk 100-1000. 0 to disable. (default 1000)
+   --drive-pacer-burst int                     Number of API calls to allow without sleeping. (default 100)
+   --drive-pacer-min-sleep Duration            Minimum time to sleep between API calls. (default 100ms)
+   --drive-root-folder-id string               ID of the root folder
+   --drive-scope string                        Scope that rclone should use when requesting access from drive.
+   --drive-service-account-credentials string  Service Account Credentials JSON blob
+   --drive-service-account-file string         Service Account Credentials JSON file path
+   --drive-shared-with-me                      Only show files that are shared with me.
+   --drive-skip-gdocs                          Skip google documents in all listings.
+   --drive-team-drive string                   ID of the Team Drive
+   --drive-trashed-only                        Only show files that are in the trash.
+   --drive-upload-cutoff SizeSuffix            Cutoff for switching to chunked upload (default 8M)
+   --drive-use-created-date                    Use file created date instead of modified date.
+   --drive-use-trash                           Send files to the trash instead of deleting permanently. (default true)
+   --drive-v2-download-min-size SizeSuffix     If Objects are greater, use drive v2 API to download. (default off)
+   --dropbox-chunk-size SizeSuffix             Upload chunk size. (< 150M). (default 48M)
+   --dropbox-client-id string                  Dropbox App Client Id
+   --dropbox-client-secret string              Dropbox App Client Secret
+   --dropbox-impersonate string                Impersonate this user when using a business account.
+   -n, --dry-run                               Do a trial run with no permanent changes
+   --dump DumpFlags                            List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+   --dump-bodies                               Dump HTTP headers and bodies - may contain sensitive info
+   --dump-headers                              Dump HTTP headers - may contain sensitive info
+   --exclude stringArray                       Exclude files matching pattern
+   --exclude-from stringArray                  Read exclude patterns from file
+   --exclude-if-present string                 Exclude directories if filename is present
+   --fast-list                                 Use recursive list if available. Uses more memory but fewer transactions.
+   --files-from stringArray                    Read list of source-file names from file
+   -f, --filter stringArray                    Add a file-filtering rule
+   --filter-from stringArray                   Read filtering patterns from a file
+   --ftp-host string                           FTP host to connect to
+   --ftp-pass string                           FTP password
+   --ftp-port string                           FTP port, leave blank to use default (21)
+   --ftp-user string                           FTP username, leave blank for current username, $USER
+   --gcs-bucket-acl string                     Access Control List for new buckets.
+   --gcs-client-id string                      Google Application Client Id
+   --gcs-client-secret string                  Google Application Client Secret
+   --gcs-location string                       Location for the newly created buckets.
+   --gcs-object-acl string                     Access Control List for new objects.
+   --gcs-project-number string                 Project number.
+   --gcs-service-account-file string           Service Account Credentials JSON file path
+   --gcs-storage-class string                  The storage class to use when storing objects in Google Cloud Storage.
+   --http-url string                           URL of http host to connect to
+   --hubic-chunk-size SizeSuffix               Above this size files will be chunked into a _segments container. (default 5G)
+   --hubic-client-id string                    Hubic Client Id
+   --hubic-client-secret string                Hubic Client Secret
+   --hubic-no-chunk                            Don't chunk files during streaming upload.
+   --ignore-case                               Ignore case in filters (case insensitive)
+   --ignore-checksum                           Skip post copy check of checksums.
+   --ignore-errors                             Delete even if there are I/O errors
+   --ignore-existing                           Skip all files that exist on destination
+   --ignore-size                               Ignore size when skipping; use mod-time or checksum.
+   -I, --ignore-times                          Don't skip files that match size and time - transfer all files
+   --immutable                                 Do not modify files. Fail if existing files have been modified.
+   --include stringArray                       Include files matching pattern
+   --include-from stringArray                  Read include patterns from file
+   --jottacloud-hard-delete                    Delete files permanently rather than putting them into the trash.
+   --jottacloud-md5-memory-limit SizeSuffix    Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+   --jottacloud-mountpoint string              The mountpoint to use.
+   --jottacloud-unlink                         Remove existing public link to file/folder with link command rather than creating.
+   --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
+   --jottacloud-user string                    User Name
+   -l, --links                                 Translate symlinks to/from regular files with a '.rclonelink' extension
+   --local-no-check-updated                    Don't check to see if the files change during upload
+   --local-no-unicode-normalization            Don't apply unicode normalization to paths and filenames (Deprecated)
+   --local-nounc string                        Disable UNC (long path names) conversion on Windows
+   --log-file string                           Log everything to this file
+   --log-format string                         Comma separated list of log format options (default "date,time")
+   --log-level string                          Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+   --low-level-retries int                     Number of low level retries to do. (default 10)
+   --max-age Duration                          Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+   --max-backlog int                           Maximum number of objects in sync or check backlog. (default 10000)
+   --max-delete int                            When synchronizing, limit the number of deletes (default -1)
+   --max-depth int                             If set limits the recursion depth to this. (default -1)
+   --max-size SizeSuffix                       Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+   --max-transfer SizeSuffix                   Maximum size of data to transfer. (default off)
+   --mega-debug                                Output more debug from Mega.
+   --mega-hard-delete                          Delete files permanently rather than putting them into the trash.
+   --mega-pass string                          Password.
+   --mega-user string                          User name
+   --memprofile string                         Write memory profile to file
+   --min-age Duration                          Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+   --min-size SizeSuffix                       Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+   --modify-window duration                    Max time diff to be considered the same (default 1ns)
+   --no-check-certificate                      Do not verify the server SSL certificate. Insecure.
+   --no-gzip-encoding                          Don't set Accept-Encoding: gzip.
+   --no-traverse                               Don't traverse destination file system on copy.
+   --no-update-modtime                         Don't update destination mod-time if files identical.
+   -x, --one-file-system                       Don't cross filesystem boundaries (unix/macOS only).
+   --onedrive-chunk-size SizeSuffix            Chunk size to upload files with - must be multiple of 320k. (default 10M)
+   --onedrive-client-id string                 Microsoft App Client Id
+   --onedrive-client-secret string             Microsoft App Client Secret
+   --onedrive-drive-id string                  The ID of the drive to use
+   --onedrive-drive-type string                The type of the drive ( personal | business | documentLibrary )
+   --onedrive-expose-onenote-files             Set to make OneNote files show up in directory listings.
+   --opendrive-password string                 Password.
+   --opendrive-username string                 Username
+   --pcloud-client-id string                   Pcloud App Client Id
+   --pcloud-client-secret string               Pcloud App Client Secret
+   -P, --progress                              Show progress during transfer.
+   --qingstor-access-key-id string             QingStor Access Key ID
+   --qingstor-chunk-size SizeSuffix            Chunk size to use for uploading. (default 4M)
+   --qingstor-connection-retries int           Number of connection retries. (default 3)
+   --qingstor-endpoint string                  Enter an endpoint URL to connect to the QingStor API.
+   --qingstor-env-auth                         Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+   --qingstor-secret-access-key string         QingStor Secret Access Key (password)
+   --qingstor-upload-concurrency int           Concurrency for multipart uploads. (default 1)
+   --qingstor-upload-cutoff SizeSuffix         Cutoff for switching to chunked upload (default 200M)
+   --qingstor-zone string                      Zone to connect to.
+   -q, --quiet                                 Print as little stuff as possible
+   --rc                                        Enable the remote control server.
+   --rc-addr string                            IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+   --rc-cert string                            SSL PEM key (concatenation of certificate and CA certificate)
+   --rc-client-ca string                       Client certificate authority to verify clients with
+   --rc-files string                           Path to local files to serve on the HTTP server.
+   --rc-htpasswd string                        htpasswd file - if not provided no authentication is done
+   --rc-key string                             SSL PEM Private key
+   --rc-max-header-bytes int                   Maximum size of request header (default 4096)
+   --rc-no-auth                                Don't require auth for certain methods.
+   --rc-pass string                            Password for authentication.
+   --rc-realm string                           realm for authentication (default "rclone")
+   --rc-serve                                  Enable the serving of remote objects.
+   --rc-server-read-timeout duration           Timeout for server reading data (default 1h0m0s)
+   --rc-server-write-timeout duration          Timeout for server writing data (default 1h0m0s)
+   --rc-user string                            User name for authentication.
+   --retries int                               Retry operations this many times if they fail (default 3)
+   --retries-sleep duration                    Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+   --s3-access-key-id string                   AWS Access Key ID.
+   --s3-acl string                             Canned ACL used when creating buckets and storing or copying objects.
+   --s3-bucket-acl string                      Canned ACL used when creating buckets.
+   --s3-chunk-size SizeSuffix                  Chunk size to use for uploading. (default 5M)
+   --s3-disable-checksum                       Don't store MD5 checksum with object metadata
+   --s3-endpoint string                        Endpoint for S3 API.
+   --s3-env-auth                               Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+   --s3-force-path-style                       If true use path style access if false use virtual hosted style. (default true)
+   --s3-location-constraint string             Location constraint - must be set to match the Region.
+   --s3-provider string                        Choose your S3 provider.
+   --s3-region string                          Region to connect to.
+   --s3-secret-access-key string               AWS Secret Access Key (password)
+   --s3-server-side-encryption string          The server-side encryption algorithm used when storing this object in S3.
+   --s3-session-token string                   An AWS session token
+   --s3-sse-kms-key-id string                  If using KMS ID you must provide the ARN of Key.
+   --s3-storage-class string                   The storage class to use when storing new objects in S3.
+   --s3-upload-concurrency int                 Concurrency for multipart uploads. (default 4)
+   --s3-upload-cutoff SizeSuffix               Cutoff for switching to chunked upload (default 200M)
+   --s3-v2-auth                                If true use v2 authentication.
+   --sftp-ask-password                         Allow asking for SFTP password when needed.
+   --sftp-disable-hashcheck                    Disable the execution of SSH commands to determine if remote file hashing is available.
+   --sftp-host string                          SSH host to connect to
+   --sftp-key-file string                      Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
+   --sftp-key-file-pass string                 The passphrase to decrypt the PEM-encoded private key file.
+   --sftp-key-use-agent                        When set forces the usage of the ssh-agent.
+   --sftp-pass string                          SSH password, leave blank to use ssh-agent.
+   --sftp-path-override string                 Override path used by SSH connection.
+   --sftp-port string                          SSH port, leave blank to use default (22)
+   --sftp-set-modtime                          Set the modified time on the remote if set. (default true)
+   --sftp-use-insecure-cipher                  Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+   --sftp-user string                          SSH username, leave blank for current username, ncw
+   --size-only                                 Skip based on size only, not mod-time or checksum
+   --skip-links                                Don't warn about skipped symlinks.
+   --stats duration                            Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+   --stats-file-name-length int                Max file name length in stats. 0 for no limit (default 45)
+   --stats-log-level string                    Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+   --stats-one-line                            Make the stats fit on one line.
+   --stats-unit string                         Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+   --streaming-upload-cutoff SizeSuffix        Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+   --suffix string                             Suffix for use with --backup-dir.
+   --swift-application-credential-id string    Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
+   --swift-application-credential-name string  Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
+   --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
+   --swift-auth string                         Authentication URL for server (OS_AUTH_URL).
+   --swift-auth-token string                   Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+   --swift-auth-version int                    AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+   --swift-chunk-size SizeSuffix               Above this size files will be chunked into a _segments container. (default 5G)
+   --swift-domain string                       User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+   --swift-endpoint-type string                Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+   --swift-env-auth                            Get swift credentials from environment variables in standard OpenStack form.
+   --swift-key string                          API key or password (OS_PASSWORD).
+   --swift-no-chunk                            Don't chunk files during streaming upload.
+   --swift-region string                       Region name - optional (OS_REGION_NAME)
+   --swift-storage-policy string               The storage policy to use when creating a new container
+   --swift-storage-url string                  Storage URL - optional (OS_STORAGE_URL)
+   --swift-tenant string                       Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+   --swift-tenant-domain string                Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+   --swift-tenant-id string                    Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+   --swift-user string                         User name to log in (OS_USERNAME).
+   --swift-user-id string                      User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+   --syslog                                    Use Syslog for logging
+   --syslog-facility string                    Facility for syslog, eg KERN,USER,... (default "DAEMON")
+   --timeout duration                          IO idle timeout (default 5m0s)
+   --tpslimit float                            Limit HTTP transactions per second to this.
+   --tpslimit-burst int                        Max burst of transactions for --tpslimit. (default 1)
+   --track-renames                             When synchronizing, track file renames and do a server side move if possible
+   --transfers int                             Number of file transfers to run in parallel. (default 4)
+   --union-remotes string                      List of space separated remotes.
+   -u, --update                                Skip files that are newer on the destination.
+   --use-cookies                               Enable session cookiejar.
+   --use-mmap                                  Use mmap allocator (see docs).
+   --use-server-modtime                        Use server modified time instead of object metadata
+   --user-agent string                         Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
+   -v, --verbose count                         Print lots more stuff (repeat for more)
+   --webdav-bearer-token string                Bearer token instead of user/pass (eg a Macaroon)
+   --webdav-pass string                        Password.
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_serve_http.md b/docs/content/commands/rclone_serve_http.md index 644bfb17c..9b0f8443d 100644 --- a/docs/content/commands/rclone_serve_http.md +++ b/docs/content/commands/rclone_serve_http.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone serve http" slug: rclone_serve_http url: /commands/rclone_serve_http/ @@ -129,6 +129,7 @@ may find that you need one or the other or both. --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off") --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) + --vfs-cache-max-size int Max total size of objects in the cache. (default off) If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -144,6 +145,11 @@ closed so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache. +If using --vfs-cache-max-size note that the cache may exceed this size +for two reasons. Firstly because it is only checked every +--vfs-cache-poll-interval. Secondly because open files cannot be +evicted from the cache. 
+ #### --vfs-cache-mode off In this mode the cache will read directly from the remote and write @@ -208,316 +214,337 @@ rclone serve http remote:path [flags] ### Options ``` - --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080") - --cert string SSL PEM key (concatenation of certificate and CA certificate) - --client-ca string Client certificate authority to verify clients with - --dir-cache-time duration Time to cache directory entries for. (default 5m0s) - --gid uint32 Override the gid field set by the filesystem. (default 502) - -h, --help help for http - --htpasswd string htpasswd file - if not provided no authentication is done - --key string SSL PEM Private key - --max-header-bytes int Maximum size of request header (default 4096) - --no-checksum Don't compare checksums on up/download. - --no-modtime Don't read/write the modification time (can speed things up). - --no-seek Don't allow seeking in files. - --pass string Password for authentication. - --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s) - --read-only Mount read-only. - --realm string realm for authentication (default "rclone") - --server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --uid uint32 Override the uid field set by the filesystem. (default 502) - --umask int Override the permission bits set by the filesystem. (default 2) - --user string User name for authentication. - --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) - --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off") - --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) - --vfs-read-chunk-size int Read the source objects in chunks. 
(default 128M) - --vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) + --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080") + --cert string SSL PEM key (concatenation of certificate and CA certificate) + --client-ca string Client certificate authority to verify clients with + --dir-cache-time duration Time to cache directory entries for. (default 5m0s) + --dir-perms FileMode Directory permissions (default 0777) + --file-perms FileMode File permissions (default 0666) + --gid uint32 Override the gid field set by the filesystem. (default 502) + -h, --help help for http + --htpasswd string htpasswd file - if not provided no authentication is done + --key string SSL PEM Private key + --max-header-bytes int Maximum size of request header (default 4096) + --no-checksum Don't compare checksums on up/download. + --no-modtime Don't read/write the modification time (can speed things up). + --no-seek Don't allow seeking in files. + --pass string Password for authentication. + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s) + --read-only Mount read-only. + --realm string realm for authentication (default "rclone") + --server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --uid uint32 Override the uid field set by the filesystem. (default 502) + --umask int Override the permission bits set by the filesystem. (default 2) + --user string User name for authentication. + --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. 
(default off) + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M) + --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) ``` ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. 
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. 
(default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. (default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. 
- --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. - --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. 
- --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. - --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. 
- --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. 
- --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. - --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). 
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. - --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. 
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. - --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. 
- --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. - --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. 
- --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. (default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. 
- --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. 
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. + --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. 
(default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. + --crypt-show-mapping For all files listed show how the names encrypt. 
+      --delete-after   When synchronizing, delete files on destination after transferring (default)
+      --delete-before   When synchronizing, delete files on destination before transferring
+      --delete-during   When synchronizing, delete files during transfer
+      --delete-excluded   Delete files on dest excluded from sync
+      --disable string   Disable a comma separated list of features. Use help to see a list.
+      --drive-acknowledge-abuse   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+      --drive-allow-import-name-change   Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+      --drive-alternate-export   Use alternate export URLs for google documents export.
+      --drive-auth-owner-only   Only consider files owned by the authenticated user.
+      --drive-chunk-size SizeSuffix   Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+      --drive-client-id string   Google Application Client Id
+      --drive-client-secret string   Google Application Client Secret
+      --drive-export-formats string   Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+      --drive-formats string   Deprecated: see export_formats
+      --drive-impersonate string   Impersonate this user when using a service account.
+      --drive-import-formats string   Comma separated list of preferred formats for uploading Google docs.
+      --drive-keep-revision-forever   Keep new head revision of each file forever.
+      --drive-list-chunk int   Size of listing chunk 100-1000. 0 to disable. (default 1000)
+      --drive-pacer-burst int   Number of API calls to allow without sleeping. (default 100)
+      --drive-pacer-min-sleep Duration   Minimum time to sleep between API calls. (default 100ms)
+      --drive-root-folder-id string   ID of the root folder
+      --drive-scope string   Scope that rclone should use when requesting access from drive.
+      --drive-service-account-credentials string   Service Account Credentials JSON blob
+      --drive-service-account-file string   Service Account Credentials JSON file path
+      --drive-shared-with-me   Only show files that are shared with me.
+      --drive-skip-gdocs   Skip google documents in all listings.
+      --drive-team-drive string   ID of the Team Drive
+      --drive-trashed-only   Only show files that are in the trash.
+      --drive-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 8M)
+      --drive-use-created-date   Use file created date instead of modified date.
+      --drive-use-trash   Send files to the trash instead of deleting permanently. (default true)
+      --drive-v2-download-min-size SizeSuffix   If Objects are greater, use drive v2 API to download. (default off)
+      --dropbox-chunk-size SizeSuffix   Upload chunk size. (< 150M). (default 48M)
+      --dropbox-client-id string   Dropbox App Client Id
+      --dropbox-client-secret string   Dropbox App Client Secret
+      --dropbox-impersonate string   Impersonate this user when using a business account.
+  -n, --dry-run   Do a trial run with no permanent changes
+      --dump DumpFlags   List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+      --dump-bodies   Dump HTTP headers and bodies - may contain sensitive info
+      --dump-headers   Dump HTTP headers - may contain sensitive info
+      --exclude stringArray   Exclude files matching pattern
+      --exclude-from stringArray   Read exclude patterns from file
+      --exclude-if-present string   Exclude directories if filename is present
+      --fast-list   Use recursive list if available. Uses more memory but fewer transactions.
+      --files-from stringArray   Read list of source-file names from file
+  -f, --filter stringArray   Add a file-filtering rule
+      --filter-from stringArray   Read filtering patterns from a file
+      --ftp-host string   FTP host to connect to
+      --ftp-pass string   FTP password
+      --ftp-port string   FTP port, leave blank to use default (21)
+      --ftp-user string   FTP username, leave blank for current username, $USER
+      --gcs-bucket-acl string   Access Control List for new buckets.
+      --gcs-client-id string   Google Application Client Id
+      --gcs-client-secret string   Google Application Client Secret
+      --gcs-location string   Location for the newly created buckets.
+      --gcs-object-acl string   Access Control List for new objects.
+      --gcs-project-number string   Project number.
+      --gcs-service-account-file string   Service Account Credentials JSON file path
+      --gcs-storage-class string   The storage class to use when storing objects in Google Cloud Storage.
+      --http-url string   URL of http host to connect to
+      --hubic-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
+      --hubic-client-id string   Hubic Client Id
+      --hubic-client-secret string   Hubic Client Secret
+      --hubic-no-chunk   Don't chunk files during streaming upload.
+      --ignore-case   Ignore case in filters (case insensitive)
+      --ignore-checksum   Skip post copy check of checksums.
+      --ignore-errors   Delete even if there are I/O errors
+      --ignore-existing   Skip all files that exist on destination
+      --ignore-size   Ignore size when skipping; use mod-time or checksum.
+  -I, --ignore-times   Don't skip files that match size and time - transfer all files
+      --immutable   Do not modify files. Fail if existing files have been modified.
+      --include stringArray   Include files matching pattern
+      --include-from stringArray   Read include patterns from file
+      --jottacloud-hard-delete   Delete files permanently rather than putting them into the trash.
+      --jottacloud-md5-memory-limit SizeSuffix   Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+      --jottacloud-mountpoint string   The mountpoint to use.
+      --jottacloud-unlink   Remove existing public link to file/folder with link command rather than creating.
+      --jottacloud-upload-resume-limit SizeSuffix   Files bigger than this can be resumed if the upload fails. (default 10M)
+      --jottacloud-user string   User Name
+  -l, --links   Translate symlinks to/from regular files with a '.rclonelink' extension
+      --local-no-check-updated   Don't check to see if the files change during upload
+      --local-no-unicode-normalization   Don't apply unicode normalization to paths and filenames (Deprecated)
+      --local-nounc string   Disable UNC (long path names) conversion on Windows
+      --log-file string   Log everything to this file
+      --log-format string   Comma separated list of log format options (default "date,time")
+      --log-level string   Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+      --low-level-retries int   Number of low level retries to do. (default 10)
+      --max-age Duration   Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --max-backlog int   Maximum number of objects in sync or check backlog. (default 10000)
+      --max-delete int   When synchronizing, limit the number of deletes (default -1)
+      --max-depth int   If set limits the recursion depth to this. (default -1)
+      --max-size SizeSuffix   Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+      --max-transfer SizeSuffix   Maximum size of data to transfer. (default off)
+      --mega-debug   Output more debug from Mega.
+      --mega-hard-delete   Delete files permanently rather than putting them into the trash.
+      --mega-pass string   Password.
+      --mega-user string   User name
+      --memprofile string   Write memory profile to file
+      --min-age Duration   Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --min-size SizeSuffix   Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+      --modify-window duration   Max time diff to be considered the same (default 1ns)
+      --no-check-certificate   Do not verify the server SSL certificate. Insecure.
+      --no-gzip-encoding   Don't set Accept-Encoding: gzip.
+      --no-traverse   Don't traverse destination file system on copy.
+      --no-update-modtime   Don't update destination mod-time if files identical.
+  -x, --one-file-system   Don't cross filesystem boundaries (unix/macOS only).
+      --onedrive-chunk-size SizeSuffix   Chunk size to upload files with - must be multiple of 320k. (default 10M)
+      --onedrive-client-id string   Microsoft App Client Id
+      --onedrive-client-secret string   Microsoft App Client Secret
+      --onedrive-drive-id string   The ID of the drive to use
+      --onedrive-drive-type string   The type of the drive ( personal | business | documentLibrary )
+      --onedrive-expose-onenote-files   Set to make OneNote files show up in directory listings.
+      --opendrive-password string   Password.
+      --opendrive-username string   Username
+      --pcloud-client-id string   Pcloud App Client Id
+      --pcloud-client-secret string   Pcloud App Client Secret
+  -P, --progress   Show progress during transfer.
+      --qingstor-access-key-id string   QingStor Access Key ID
+      --qingstor-chunk-size SizeSuffix   Chunk size to use for uploading. (default 4M)
+      --qingstor-connection-retries int   Number of connection retries. (default 3)
+      --qingstor-endpoint string   Enter an endpoint URL to connect to the QingStor API.
+      --qingstor-env-auth   Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
+      --qingstor-secret-access-key string   QingStor Secret Access Key (password)
+      --qingstor-upload-concurrency int   Concurrency for multipart uploads. (default 1)
+      --qingstor-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 200M)
+      --qingstor-zone string   Zone to connect to.
+  -q, --quiet   Print as little stuff as possible
+      --rc   Enable the remote control server.
+      --rc-addr string   IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+      --rc-cert string   SSL PEM key (concatenation of certificate and CA certificate)
+      --rc-client-ca string   Client certificate authority to verify clients with
+      --rc-files string   Path to local files to serve on the HTTP server.
+      --rc-htpasswd string   htpasswd file - if not provided no authentication is done
+      --rc-key string   SSL PEM Private key
+      --rc-max-header-bytes int   Maximum size of request header (default 4096)
+      --rc-no-auth   Don't require auth for certain methods.
+      --rc-pass string   Password for authentication.
+      --rc-realm string   realm for authentication (default "rclone")
+      --rc-serve   Enable the serving of remote objects.
+      --rc-server-read-timeout duration   Timeout for server reading data (default 1h0m0s)
+      --rc-server-write-timeout duration   Timeout for server writing data (default 1h0m0s)
+      --rc-user string   User name for authentication.
+      --retries int   Retry operations this many times if they fail (default 3)
+      --retries-sleep duration   Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+      --s3-access-key-id string   AWS Access Key ID.
+      --s3-acl string   Canned ACL used when creating buckets and storing or copying objects.
+      --s3-bucket-acl string   Canned ACL used when creating buckets.
+      --s3-chunk-size SizeSuffix   Chunk size to use for uploading. (default 5M)
+      --s3-disable-checksum   Don't store MD5 checksum with object metadata
+      --s3-endpoint string   Endpoint for S3 API.
+      --s3-env-auth   Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+      --s3-force-path-style   If true use path style access if false use virtual hosted style. (default true)
+      --s3-location-constraint string   Location constraint - must be set to match the Region.
+      --s3-provider string   Choose your S3 provider.
+      --s3-region string   Region to connect to.
+      --s3-secret-access-key string   AWS Secret Access Key (password)
+      --s3-server-side-encryption string   The server-side encryption algorithm used when storing this object in S3.
+      --s3-session-token string   An AWS session token
+      --s3-sse-kms-key-id string   If using KMS ID you must provide the ARN of Key.
+      --s3-storage-class string   The storage class to use when storing new objects in S3.
+      --s3-upload-concurrency int   Concurrency for multipart uploads. (default 4)
+      --s3-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 200M)
+      --s3-v2-auth   If true use v2 authentication.
+      --sftp-ask-password   Allow asking for SFTP password when needed.
+      --sftp-disable-hashcheck   Disable the execution of SSH commands to determine if remote file hashing is available.
+      --sftp-host string   SSH host to connect to
+      --sftp-key-file string   Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
+      --sftp-key-file-pass string   The passphrase to decrypt the PEM-encoded private key file.
+      --sftp-key-use-agent   When set forces the usage of the ssh-agent.
+      --sftp-pass string   SSH password, leave blank to use ssh-agent.
+      --sftp-path-override string   Override path used by SSH connection.
+      --sftp-port string   SSH port, leave blank to use default (22)
+      --sftp-set-modtime   Set the modified time on the remote if set. (default true)
+      --sftp-use-insecure-cipher   Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+      --sftp-user string   SSH username, leave blank for current username, ncw
+      --size-only   Skip based on size only, not mod-time or checksum
+      --skip-links   Don't warn about skipped symlinks.
+      --stats duration   Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+      --stats-file-name-length int   Max file name length in stats. 0 for no limit (default 45)
+      --stats-log-level string   Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+      --stats-one-line   Make the stats fit on one line.
+      --stats-unit string   Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+      --streaming-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+      --suffix string   Suffix for use with --backup-dir.
+      --swift-application-credential-id string   Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
+      --swift-application-credential-name string   Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
+      --swift-application-credential-secret string   Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
+      --swift-auth string   Authentication URL for server (OS_AUTH_URL).
+      --swift-auth-token string   Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+      --swift-auth-version int   AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+      --swift-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
+      --swift-domain string   User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+      --swift-endpoint-type string   Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+      --swift-env-auth   Get swift credentials from environment variables in standard OpenStack form.
+      --swift-key string   API key or password (OS_PASSWORD).
+      --swift-no-chunk   Don't chunk files during streaming upload.
+      --swift-region string   Region name - optional (OS_REGION_NAME)
+      --swift-storage-policy string   The storage policy to use when creating a new container
+      --swift-storage-url string   Storage URL - optional (OS_STORAGE_URL)
+      --swift-tenant string   Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+      --swift-tenant-domain string   Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+      --swift-tenant-id string   Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+      --swift-user string   User name to log in (OS_USERNAME).
+      --swift-user-id string   User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+      --syslog   Use Syslog for logging
+      --syslog-facility string   Facility for syslog, eg KERN,USER,... (default "DAEMON")
+      --timeout duration   IO idle timeout (default 5m0s)
+      --tpslimit float   Limit HTTP transactions per second to this.
+      --tpslimit-burst int   Max burst of transactions for --tpslimit. (default 1)
+      --track-renames   When synchronizing, track file renames and do a server side move if possible
+      --transfers int   Number of file transfers to run in parallel. (default 4)
+      --union-remotes string   List of space separated remotes.
+  -u, --update   Skip files that are newer on the destination.
+      --use-cookies   Enable session cookiejar.
+      --use-mmap   Use mmap allocator (see docs).
+      --use-server-modtime   Use server modified time instead of object metadata
+      --user-agent string   Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
+  -v, --verbose count   Print lots more stuff (repeat for more)
+      --webdav-bearer-token string   Bearer token instead of user/pass (eg a Macaroon)
+      --webdav-pass string   Password.
+      --webdav-url string   URL of http host to connect to
+      --webdav-user string   User name
+      --webdav-vendor string   Name of the Webdav site/service/software you are using
+      --yandex-client-id string   Yandex Client Id
+      --yandex-client-secret string   Yandex Client Secret
+      --yandex-unlink   Remove existing public link to file/folder with link command rather than creating.
 ```
 
 ### SEE ALSO
 
 * [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
 
-###### Auto generated by spf13/cobra on 24-Nov-2018
+###### Auto generated by spf13/cobra on 9-Feb-2019
diff --git a/docs/content/commands/rclone_serve_restic.md b/docs/content/commands/rclone_serve_restic.md
index bb4967cb0..c49bc782d 100644
--- a/docs/content/commands/rclone_serve_restic.md
+++ b/docs/content/commands/rclone_serve_restic.md
@@ -1,5 +1,5 @@
 ---
-date: 2018-11-24T13:43:29Z
+date: 2019-02-09T10:42:18Z
 title: "rclone serve restic"
 slug: rclone_serve_restic
 url: /commands/rclone_serve_restic/
@@ -161,285 +161,303 @@ rclone serve restic remote:path [flags]
 ### Options inherited from parent commands
 
 ```
-      --acd-auth-url string   Auth server URL.
-      --acd-client-id string   Amazon Application Client ID.
-      --acd-client-secret string   Amazon Application Client Secret.
-      --acd-templink-threshold SizeSuffix   Files >= this size will be downloaded via their tempLink. (default 9G)
-      --acd-token-url string   Token server url.
-      --acd-upload-wait-per-gb Duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
-      --alias-remote string   Remote or path to alias.
-      --ask-password   Allow prompt for password for encrypted configuration. (default true)
-      --auto-confirm   If enabled, do not request console confirmation.
-      --azureblob-access-tier string   Access tier of blob: hot, cool or archive.
-      --azureblob-account string   Storage Account Name (leave blank to use connection string or SAS URL)
-      --azureblob-chunk-size SizeSuffix   Upload chunk size (<= 100MB). (default 4M)
-      --azureblob-endpoint string   Endpoint for the service
-      --azureblob-key string   Storage Account Key (leave blank to use connection string or SAS URL)
-      --azureblob-list-chunk int   Size of blob list. (default 5000)
-      --azureblob-sas-url string   SAS URL for container level access only
-      --azureblob-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (<= 256MB). (default 256M)
-      --b2-account string   Account ID or Application Key ID
-      --b2-chunk-size SizeSuffix   Upload chunk size. Must fit in memory. (default 96M)
-      --b2-endpoint string   Endpoint for the service.
-      --b2-hard-delete   Permanently delete files on remote removal, otherwise hide files.
-      --b2-key string   Application Key
-      --b2-test-mode string   A flag string for X-Bz-Test-Mode header for debugging.
-      --b2-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload. (default 200M)
-      --b2-versions   Include old versions in directory listings.
-      --backup-dir string   Make backups into hierarchy based in DIR.
-      --bind string   Local address to bind to for outgoing connections, IPv4, IPv6 or name.
-      --box-client-id string   Box App Client Id.
-      --box-client-secret string   Box App Client Secret
-      --box-commit-retries int   Max number of times to try committing a multipart file. (default 100)
-      --box-upload-cutoff SizeSuffix   Cutoff for switching to multipart upload (>= 50MB). (default 50M)
-      --buffer-size int   In memory buffer size when reading files for each --transfer. (default 16M)
-      --bwlimit BwTimetable   Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
-      --cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
-      --cache-chunk-no-memory   Disable the in-memory cache for storing chunks during streaming.
-      --cache-chunk-path string   Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
-      --cache-chunk-size SizeSuffix   The size of a chunk (partial file data). (default 5M)
-      --cache-chunk-total-size SizeSuffix   The total size that the chunks can take up on the local disk. (default 10G)
-      --cache-db-path string   Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
-      --cache-db-purge   Clear all the cached data for this remote on start.
-      --cache-db-wait-time Duration   How long to wait for the DB to be available - 0 is unlimited (default 1s)
-      --cache-dir string   Directory rclone will use for caching. (default "$HOME/.cache/rclone")
-      --cache-info-age Duration   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
-      --cache-plex-insecure string   Skip all certificate verifications when connecting to the Plex server
-      --cache-plex-password string   The password of the Plex user
-      --cache-plex-url string   The URL of the Plex server
-      --cache-plex-username string   The username of the Plex user
-      --cache-read-retries int   How many times to retry a read from a cache storage. (default 10)
-      --cache-remote string   Remote to cache.
-      --cache-rps int   Limits the number of requests per second to the source FS (-1 to disable) (default -1)
-      --cache-tmp-upload-path string   Directory to keep temporary files until they are uploaded.
-      --cache-tmp-wait-time Duration   How long should files be stored in local cache before being uploaded (default 15s)
-      --cache-workers int   How many workers should run in parallel to download chunks. (default 4)
-      --cache-writes   Cache file data on writes through the FS
-      --checkers int   Number of checkers to run in parallel. (default 8)
-  -c, --checksum   Skip based on checksum & size, not mod-time & size
-      --config string   Config file. (default "/home/ncw/.rclone.conf")
-      --contimeout duration   Connect timeout (default 1m0s)
-  -L, --copy-links   Follow symlinks and copy the pointed to item.
-      --cpuprofile string   Write cpu profile to file
-      --crypt-directory-name-encryption   Option to either encrypt directory names or leave them intact. (default true)
-      --crypt-filename-encryption string   How to encrypt the filenames. (default "standard")
-      --crypt-password string   Password or pass phrase for encryption.
-      --crypt-password2 string   Password or pass phrase for salt. Optional but recommended.
-      --crypt-remote string   Remote to encrypt/decrypt.
-      --crypt-show-mapping   For all files listed show how the names encrypt.
-      --delete-after   When synchronizing, delete files on destination after transferring (default)
-      --delete-before   When synchronizing, delete files on destination before transferring
-      --delete-during   When synchronizing, delete files during transfer
-      --delete-excluded   Delete files on dest excluded from sync
-      --disable string   Disable a comma separated list of features. Use help to see a list.
-      --drive-acknowledge-abuse   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
-      --drive-allow-import-name-change   Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
-      --drive-alternate-export   Use alternate export URLs for google documents export.,
-      --drive-auth-owner-only   Only consider files owned by the authenticated user.
-      --drive-chunk-size SizeSuffix   Upload chunk size. Must a power of 2 >= 256k. (default 8M)
-      --drive-client-id string   Google Application Client Id
-      --drive-client-secret string   Google Application Client Secret
-      --drive-export-formats string   Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
-      --drive-formats string   Deprecated: see export_formats
-      --drive-impersonate string   Impersonate this user when using a service account.
-      --drive-import-formats string   Comma separated list of preferred formats for uploading Google docs.
-      --drive-keep-revision-forever   Keep new head revision of each file forever.
-      --drive-list-chunk int   Size of listing chunk 100-1000. 0 to disable. (default 1000)
-      --drive-root-folder-id string   ID of the root folder
-      --drive-scope string   Scope that rclone should use when requesting access from drive.
-      --drive-service-account-credentials string   Service Account Credentials JSON blob
-      --drive-service-account-file string   Service Account Credentials JSON file path
-      --drive-shared-with-me   Only show files that are shared with me.
-      --drive-skip-gdocs   Skip google documents in all listings.
-      --drive-team-drive string   ID of the Team Drive
-      --drive-trashed-only   Only show files that are in the trash.
-      --drive-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 8M)
-      --drive-use-created-date   Use file created date instead of modified date.,
-      --drive-use-trash   Send files to the trash instead of deleting permanently. (default true)
-      --drive-v2-download-min-size SizeSuffix   If Object's are greater, use drive v2 API to download. (default off)
-      --dropbox-chunk-size SizeSuffix   Upload chunk size. (< 150M). (default 48M)
-      --dropbox-client-id string   Dropbox App Client Id
-      --dropbox-client-secret string   Dropbox App Client Secret
-      --dropbox-impersonate string   Impersonate this user when using a business account.
-  -n, --dry-run   Do a trial run with no permanent changes
-      --dump string   List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
-      --dump-bodies   Dump HTTP headers and bodies - may contain sensitive info
-      --dump-headers   Dump HTTP bodies - may contain sensitive info
-      --exclude stringArray   Exclude files matching pattern
-      --exclude-from stringArray   Read exclude patterns from file
-      --exclude-if-present string   Exclude directories if filename is present
-      --fast-list   Use recursive list if available. Uses more memory but fewer transactions.
-      --files-from stringArray   Read list of source-file names from file
-  -f, --filter stringArray   Add a file-filtering rule
-      --filter-from stringArray   Read filtering patterns from a file
-      --ftp-host string   FTP host to connect to
-      --ftp-pass string   FTP password
-      --ftp-port string   FTP port, leave blank to use default (21)
-      --ftp-user string   FTP username, leave blank for current username, $USER
-      --gcs-bucket-acl string   Access Control List for new buckets.
-      --gcs-client-id string   Google Application Client Id
-      --gcs-client-secret string   Google Application Client Secret
-      --gcs-location string   Location for the newly created buckets.
-      --gcs-object-acl string   Access Control List for new objects.
-      --gcs-project-number string   Project number.
-      --gcs-service-account-file string   Service Account Credentials JSON file path
-      --gcs-storage-class string   The storage class to use when storing objects in Google Cloud Storage.
-      --http-url string   URL of http host to connect to
-      --hubic-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
-      --hubic-client-id string   Hubic Client Id
-      --hubic-client-secret string   Hubic Client Secret
-      --ignore-case   Ignore case in filters (case insensitive)
-      --ignore-checksum   Skip post copy check of checksums.
-      --ignore-errors   delete even if there are I/O errors
-      --ignore-existing   Skip all files that exist on destination
-      --ignore-size   Ignore size when skipping use mod-time or checksum.
-  -I, --ignore-times   Don't skip files that match size and time - transfer all files
-      --immutable   Do not modify files. Fail if existing files have been modified.
-      --include stringArray   Include files matching pattern
-      --include-from stringArray   Read include patterns from file
-      --jottacloud-hard-delete   Delete files permanently rather than putting them into the trash.
-      --jottacloud-md5-memory-limit SizeSuffix   Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
-      --jottacloud-mountpoint string   The mountpoint to use.
-      --jottacloud-pass string   Password.
-      --jottacloud-unlink   Remove existing public link to file/folder with link command rather than creating.
-      --jottacloud-user string   User Name
-      --local-no-check-updated   Don't check to see if the files change during upload
-      --local-no-unicode-normalization   Don't apply unicode normalization to paths and filenames (Deprecated)
-      --local-nounc string   Disable UNC (long path names) conversion on Windows
-      --log-file string   Log everything to this file
-      --log-format string   Comma separated list of log format options (default "date,time")
-      --log-level string   Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
-      --low-level-retries int   Number of low level retries to do. (default 10)
-      --max-age duration   Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-      --max-backlog int   Maximum number of objects in sync or check backlog. (default 10000)
-      --max-delete int   When synchronizing, limit the number of deletes (default -1)
-      --max-depth int   If set limits the recursion depth to this. (default -1)
-      --max-size int   Only transfer files smaller than this in k or suffix b|k|M|G (default off)
-      --max-transfer int   Maximum size of data to transfer. (default off)
-      --mega-debug   Output more debug from Mega.
-      --mega-hard-delete   Delete files permanently rather than putting them into the trash.
-      --mega-pass string   Password.
-      --mega-user string   User name
-      --memprofile string   Write memory profile to file
-      --min-age duration   Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-      --min-size int   Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-      --modify-window duration   Max time diff to be considered the same (default 1ns)
-      --no-check-certificate   Do not verify the server SSL certificate. Insecure.
-      --no-gzip-encoding   Don't set Accept-Encoding: gzip.
-      --no-traverse   Obsolete - does nothing.
-      --no-update-modtime   Don't update destination mod-time if files identical.
-  -x, --one-file-system   Don't cross filesystem boundaries (unix/macOS only).
-      --onedrive-chunk-size SizeSuffix   Chunk size to upload files with - must be multiple of 320k. (default 10M)
-      --onedrive-client-id string   Microsoft App Client Id
-      --onedrive-client-secret string   Microsoft App Client Secret
-      --onedrive-drive-id string   The ID of the drive to use
-      --onedrive-drive-type string   The type of the drive ( personal | business | documentLibrary )
-      --onedrive-expose-onenote-files   Set to make OneNote files show up in directory listings.
-      --opendrive-password string   Password.
-      --opendrive-username string   Username
-      --pcloud-client-id string   Pcloud App Client Id
-      --pcloud-client-secret string   Pcloud App Client Secret
-  -P, --progress   Show progress during transfer.
-      --qingstor-access-key-id string   QingStor Access Key ID
-      --qingstor-connection-retries int   Number of connection retries. (default 3)
-      --qingstor-endpoint string   Enter a endpoint URL to connection QingStor API.
-      --qingstor-env-auth   Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
-      --qingstor-secret-access-key string   QingStor Secret Access Key (password)
-      --qingstor-zone string   Zone to connect to.
-  -q, --quiet   Print as little stuff as possible
-      --rc   Enable the remote control server.
-      --rc-addr string   IPaddress:Port or :Port to bind server to. (default "localhost:5572")
-      --rc-cert string   SSL PEM key (concatenation of certificate and CA certificate)
-      --rc-client-ca string   Client certificate authority to verify clients with
-      --rc-files string   Path to local files to serve on the HTTP server.
-      --rc-htpasswd string   htpasswd file - if not provided no authentication is done
-      --rc-key string   SSL PEM Private key
-      --rc-max-header-bytes int   Maximum size of request header (default 4096)
-      --rc-no-auth   Don't require auth for certain methods.
- --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. 
- --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. - --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. 
(default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. (default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. 
Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. 
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for Google documents export. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. 
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors Delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping, use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_serve_webdav.md b/docs/content/commands/rclone_serve_webdav.md index f839579c0..e2b05ddb1 100644 --- a/docs/content/commands/rclone_serve_webdav.md +++ b/docs/content/commands/rclone_serve_webdav.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone serve webdav" slug: rclone_serve_webdav url: /commands/rclone_serve_webdav/ @@ -137,6 +137,7 @@ may find that you need one or the other or both. --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off") --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) + --vfs-cache-max-size int Max total size of objects in the cache. (default off) If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -152,6 +153,11 @@ closed so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache. +If using --vfs-cache-max-size note that the cache may exceed this size +for two reasons. Firstly because it is only checked every +--vfs-cache-poll-interval. Secondly because open files cannot be +evicted from the cache. 
+ #### --vfs-cache-mode off In this mode the cache will read directly from the remote and write @@ -216,317 +222,338 @@ rclone serve webdav remote:path [flags] ### Options ``` - --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080") - --cert string SSL PEM key (concatenation of certificate and CA certificate) - --client-ca string Client certificate authority to verify clients with - --dir-cache-time duration Time to cache directory entries for. (default 5m0s) - --etag-hash string Which hash to use for the ETag, or auto or blank for off - --gid uint32 Override the gid field set by the filesystem. (default 502) - -h, --help help for webdav - --htpasswd string htpasswd file - if not provided no authentication is done - --key string SSL PEM Private key - --max-header-bytes int Maximum size of request header (default 4096) - --no-checksum Don't compare checksums on up/download. - --no-modtime Don't read/write the modification time (can speed things up). - --no-seek Don't allow seeking in files. - --pass string Password for authentication. - --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s) - --read-only Mount read-only. - --realm string realm for authentication (default "rclone") - --server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --uid uint32 Override the uid field set by the filesystem. (default 502) - --umask int Override the permission bits set by the filesystem. (default 2) - --user string User name for authentication. - --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) - --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off") - --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. 
(default 1m0s) - --vfs-read-chunk-size int Read the source objects in chunks. (default 128M) - --vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) + --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080") + --cert string SSL PEM key (concatenation of certificate and CA certificate) + --client-ca string Client certificate authority to verify clients with + --dir-cache-time duration Time to cache directory entries for. (default 5m0s) + --dir-perms FileMode Directory permissions (default 0777) + --etag-hash string Which hash to use for the ETag, or auto or blank for off + --file-perms FileMode File permissions (default 0666) + --gid uint32 Override the gid field set by the filesystem. (default 502) + -h, --help help for webdav + --htpasswd string htpasswd file - if not provided no authentication is done + --key string SSL PEM Private key + --max-header-bytes int Maximum size of request header (default 4096) + --no-checksum Don't compare checksums on up/download. + --no-modtime Don't read/write the modification time (can speed things up). + --no-seek Don't allow seeking in files. + --pass string Password for authentication. + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s) + --read-only Mount read-only. + --realm string realm for authentication (default "rclone") + --server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --uid uint32 Override the uid field set by the filesystem. (default 502) + --umask int Override the permission bits set by the filesystem. (default 2) + --user string User name for authentication. + --vfs-cache-max-age duration Max age of objects in the cache. 
(default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off) + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M) + --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) ``` ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. 
Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. 
(default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. (default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. 
- --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. - --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. 
- --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. - --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. 
- --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. 
- --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. - --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). 
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. - --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. 
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. - --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. 
- --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. - --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. 
- --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. (default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. 
- --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. 
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. + --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. 
(default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. + --crypt-show-mapping For all files listed show how the names encrypt. 
+ --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. 
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+      --files-from stringArray                Read list of source-file names from file
+  -f, --filter stringArray                    Add a file-filtering rule
+      --filter-from stringArray               Read filtering patterns from a file
+      --ftp-host string                       FTP host to connect to
+      --ftp-pass string                       FTP password
+      --ftp-port string                       FTP port, leave blank to use default (21)
+      --ftp-user string                       FTP username, leave blank for current username, $USER
+      --gcs-bucket-acl string                 Access Control List for new buckets.
+      --gcs-client-id string                  Google Application Client Id
+      --gcs-client-secret string              Google Application Client Secret
+      --gcs-location string                   Location for the newly created buckets.
+      --gcs-object-acl string                 Access Control List for new objects.
+      --gcs-project-number string             Project number.
+      --gcs-service-account-file string       Service Account Credentials JSON file path
+      --gcs-storage-class string              The storage class to use when storing objects in Google Cloud Storage.
+      --http-url string                       URL of http host to connect to
+      --hubic-chunk-size SizeSuffix           Above this size files will be chunked into a _segments container. (default 5G)
+      --hubic-client-id string                Hubic Client Id
+      --hubic-client-secret string            Hubic Client Secret
+      --hubic-no-chunk                        Don't chunk files during streaming upload.
+      --ignore-case                           Ignore case in filters (case insensitive)
+      --ignore-checksum                       Skip post copy check of checksums.
+      --ignore-errors                         Delete even if there are I/O errors
+      --ignore-existing                       Skip all files that exist on destination
+      --ignore-size                           Ignore size when skipping; use mod-time or checksum.
+  -I, --ignore-times                          Don't skip files that match size and time - transfer all files
+      --immutable                             Do not modify files. Fail if existing files have been modified.
+      --include stringArray                   Include files matching pattern
+      --include-from stringArray              Read include patterns from file
+      --jottacloud-hard-delete                Delete files permanently rather than putting them into the trash.
+      --jottacloud-md5-memory-limit SizeSuffix   Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+      --jottacloud-mountpoint string          The mountpoint to use.
+      --jottacloud-unlink                     Remove existing public link to file/folder with link command rather than creating.
+      --jottacloud-upload-resume-limit SizeSuffix   Files bigger than this can be resumed if the upload fails. (default 10M)
+      --jottacloud-user string                User Name
+  -l, --links                                 Translate symlinks to/from regular files with a '.rclonelink' extension
+      --local-no-check-updated                Don't check to see if the files change during upload
+      --local-no-unicode-normalization        Don't apply unicode normalization to paths and filenames (Deprecated)
+      --local-nounc string                    Disable UNC (long path names) conversion on Windows
+      --log-file string                       Log everything to this file
+      --log-format string                     Comma separated list of log format options (default "date,time")
+      --log-level string                      Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+      --low-level-retries int                 Number of low level retries to do. (default 10)
+      --max-age Duration                      Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --max-backlog int                       Maximum number of objects in sync or check backlog. (default 10000)
+      --max-delete int                        When synchronizing, limit the number of deletes (default -1)
+      --max-depth int                         If set limits the recursion depth to this. (default -1)
+      --max-size SizeSuffix                   Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+      --max-transfer SizeSuffix               Maximum size of data to transfer. (default off)
+      --mega-debug                            Output more debug from Mega.
+      --mega-hard-delete                      Delete files permanently rather than putting them into the trash.
+      --mega-pass string                      Password.
+      --mega-user string                      User name
+      --memprofile string                     Write memory profile to file
+      --min-age Duration                      Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --min-size SizeSuffix                   Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+      --modify-window duration                Max time diff to be considered the same (default 1ns)
+      --no-check-certificate                  Do not verify the server SSL certificate. Insecure.
+      --no-gzip-encoding                      Don't set Accept-Encoding: gzip.
+      --no-traverse                           Don't traverse destination file system on copy.
+      --no-update-modtime                     Don't update destination mod-time if files identical.
+  -x, --one-file-system                       Don't cross filesystem boundaries (unix/macOS only).
+      --onedrive-chunk-size SizeSuffix        Chunk size to upload files with - must be multiple of 320k. (default 10M)
+      --onedrive-client-id string             Microsoft App Client Id
+      --onedrive-client-secret string         Microsoft App Client Secret
+      --onedrive-drive-id string              The ID of the drive to use
+      --onedrive-drive-type string            The type of the drive ( personal | business | documentLibrary )
+      --onedrive-expose-onenote-files         Set to make OneNote files show up in directory listings.
+      --opendrive-password string             Password.
+      --opendrive-username string             Username
+      --pcloud-client-id string               Pcloud App Client Id
+      --pcloud-client-secret string           Pcloud App Client Secret
+  -P, --progress                              Show progress during transfer.
+      --qingstor-access-key-id string         QingStor Access Key ID
+      --qingstor-chunk-size SizeSuffix        Chunk size to use for uploading. (default 4M)
+      --qingstor-connection-retries int       Number of connection retries. (default 3)
+      --qingstor-endpoint string              Enter an endpoint URL to connect to the QingStor API.
+      --qingstor-env-auth                     Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+      --qingstor-secret-access-key string     QingStor Secret Access Key (password)
+      --qingstor-upload-concurrency int       Concurrency for multipart uploads. (default 1)
+      --qingstor-upload-cutoff SizeSuffix     Cutoff for switching to chunked upload (default 200M)
+      --qingstor-zone string                  Zone to connect to.
+  -q, --quiet                                 Print as little stuff as possible
+      --rc                                    Enable the remote control server.
+      --rc-addr string                        IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+      --rc-cert string                        SSL PEM key (concatenation of certificate and CA certificate)
+      --rc-client-ca string                   Client certificate authority to verify clients with
+      --rc-files string                       Path to local files to serve on the HTTP server.
+      --rc-htpasswd string                    htpasswd file - if not provided no authentication is done
+      --rc-key string                         SSL PEM Private key
+      --rc-max-header-bytes int               Maximum size of request header (default 4096)
+      --rc-no-auth                            Don't require auth for certain methods.
+      --rc-pass string                        Password for authentication.
+      --rc-realm string                       realm for authentication (default "rclone")
+      --rc-serve                              Enable the serving of remote objects.
+      --rc-server-read-timeout duration       Timeout for server reading data (default 1h0m0s)
+      --rc-server-write-timeout duration      Timeout for server writing data (default 1h0m0s)
+      --rc-user string                        User name for authentication.
+      --retries int                           Retry operations this many times if they fail (default 3)
+      --retries-sleep duration                Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+      --s3-access-key-id string               AWS Access Key ID.
+      --s3-acl string                         Canned ACL used when creating buckets and storing or copying objects.
+      --s3-bucket-acl string                  Canned ACL used when creating buckets.
+      --s3-chunk-size SizeSuffix              Chunk size to use for uploading. (default 5M)
+      --s3-disable-checksum                   Don't store MD5 checksum with object metadata
+      --s3-endpoint string                    Endpoint for S3 API.
+      --s3-env-auth                           Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+      --s3-force-path-style                   If true use path style access, if false use virtual hosted style. (default true)
+      --s3-location-constraint string         Location constraint - must be set to match the Region.
+      --s3-provider string                    Choose your S3 provider.
+      --s3-region string                      Region to connect to.
+      --s3-secret-access-key string           AWS Secret Access Key (password)
+      --s3-server-side-encryption string      The server-side encryption algorithm used when storing this object in S3.
+      --s3-session-token string               An AWS session token
+      --s3-sse-kms-key-id string              If using KMS ID you must provide the ARN of Key.
+      --s3-storage-class string               The storage class to use when storing new objects in S3.
+      --s3-upload-concurrency int             Concurrency for multipart uploads. (default 4)
+      --s3-upload-cutoff SizeSuffix           Cutoff for switching to chunked upload (default 200M)
+      --s3-v2-auth                            If true use v2 authentication.
+      --sftp-ask-password                     Allow asking for SFTP password when needed.
+      --sftp-disable-hashcheck                Disable the execution of SSH commands to determine if remote file hashing is available.
+      --sftp-host string                      SSH host to connect to
+      --sftp-key-file string                  Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
+      --sftp-key-file-pass string             The passphrase to decrypt the PEM-encoded private key file.
+      --sftp-key-use-agent                    When set forces the usage of the ssh-agent.
+      --sftp-pass string                      SSH password, leave blank to use ssh-agent.
+      --sftp-path-override string             Override path used by SSH connection.
+      --sftp-port string                      SSH port, leave blank to use default (22)
+      --sftp-set-modtime                      Set the modified time on the remote if set. (default true)
+      --sftp-use-insecure-cipher              Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+      --sftp-user string                      SSH username, leave blank for current username, ncw
+      --size-only                             Skip based on size only, not mod-time or checksum
+      --skip-links                            Don't warn about skipped symlinks.
+      --stats duration                        Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+      --stats-file-name-length int            Max file name length in stats. 0 for no limit (default 45)
+      --stats-log-level string                Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+      --stats-one-line                        Make the stats fit on one line.
+      --stats-unit string                     Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+      --streaming-upload-cutoff SizeSuffix    Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+      --suffix string                         Suffix for use with --backup-dir.
+      --swift-application-credential-id string   Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
+      --swift-application-credential-name string   Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
+      --swift-application-credential-secret string   Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
+      --swift-auth string                     Authentication URL for server (OS_AUTH_URL).
+      --swift-auth-token string               Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+      --swift-auth-version int                AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+      --swift-chunk-size SizeSuffix           Above this size files will be chunked into a _segments container. (default 5G)
+      --swift-domain string                   User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+      --swift-endpoint-type string            Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+      --swift-env-auth                        Get swift credentials from environment variables in standard OpenStack form.
+      --swift-key string                      API key or password (OS_PASSWORD).
+      --swift-no-chunk                        Don't chunk files during streaming upload.
+      --swift-region string                   Region name - optional (OS_REGION_NAME)
+      --swift-storage-policy string           The storage policy to use when creating a new container
+      --swift-storage-url string              Storage URL - optional (OS_STORAGE_URL)
+      --swift-tenant string                   Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+      --swift-tenant-domain string            Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+      --swift-tenant-id string                Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+      --swift-user string                     User name to log in (OS_USERNAME).
+      --swift-user-id string                  User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+      --syslog                                Use Syslog for logging
+      --syslog-facility string                Facility for syslog, eg KERN,USER,... (default "DAEMON")
+      --timeout duration                      IO idle timeout (default 5m0s)
+      --tpslimit float                        Limit HTTP transactions per second to this.
+      --tpslimit-burst int                    Max burst of transactions for --tpslimit. (default 1)
+      --track-renames                         When synchronizing, track file renames and do a server side move if possible
+      --transfers int                         Number of file transfers to run in parallel. (default 4)
+      --union-remotes string                  List of space separated remotes.
+  -u, --update                                Skip files that are newer on the destination.
+      --use-cookies                           Enable session cookiejar.
+      --use-mmap                              Use mmap allocator (see docs).
+      --use-server-modtime                    Use server modified time instead of object metadata
+      --user-agent string                     Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
+  -v, --verbose count                         Print lots more stuff (repeat for more)
+      --webdav-bearer-token string            Bearer token instead of user/pass (eg a Macaroon)
+      --webdav-pass string                    Password.
+      --webdav-url string                     URL of http host to connect to
+      --webdav-user string                    User name
+      --webdav-vendor string                  Name of the Webdav site/service/software you are using
+      --yandex-client-id string               Yandex Client Id
+      --yandex-client-secret string           Yandex Client Secret
+      --yandex-unlink                         Remove existing public link to file/folder with link command rather than creating.
 ```
 
 ### SEE ALSO
 
 * [rclone serve](/commands/rclone_serve/)	 - Serve a remote over a protocol.
 
-###### Auto generated by spf13/cobra on 24-Nov-2018
+###### Auto generated by spf13/cobra on 9-Feb-2019
diff --git a/docs/content/commands/rclone_settier.md b/docs/content/commands/rclone_settier.md
index 43621aa22..25823dc58 100644
--- a/docs/content/commands/rclone_settier.md
+++ b/docs/content/commands/rclone_settier.md
@@ -1,5 +1,5 @@
 ---
-date: 2018-11-24T13:43:29Z
+date: 2019-02-09T10:42:18Z
 title: "rclone settier"
 slug: rclone_settier
 url: /commands/rclone_settier/
@@ -47,285 +47,303 @@ rclone settier tier remote:path [flags]
 ### Options inherited from parent commands
 
 ```
-      --acd-auth-url string                   Auth server URL.
-      --acd-client-id string                  Amazon Application Client ID.
-      --acd-client-secret string              Amazon Application Client Secret.
-      --acd-templink-threshold SizeSuffix     Files >= this size will be downloaded via their tempLink. (default 9G)
-      --acd-token-url string                  Token server url.
-      --acd-upload-wait-per-gb Duration       Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
-      --alias-remote string                   Remote or path to alias.
-      --ask-password                          Allow prompt for password for encrypted configuration. (default true)
-      --auto-confirm                          If enabled, do not request console confirmation.
-      --azureblob-access-tier string          Access tier of blob: hot, cool or archive.
-      --azureblob-account string              Storage Account Name (leave blank to use connection string or SAS URL)
-      --azureblob-chunk-size SizeSuffix       Upload chunk size (<= 100MB). (default 4M)
-      --azureblob-endpoint string             Endpoint for the service
-      --azureblob-key string                  Storage Account Key (leave blank to use connection string or SAS URL)
-      --azureblob-list-chunk int              Size of blob list. (default 5000)
-      --azureblob-sas-url string              SAS URL for container level access only
-      --azureblob-upload-cutoff SizeSuffix    Cutoff for switching to chunked upload (<= 256MB). (default 256M)
-      --b2-account string                     Account ID or Application Key ID
-      --b2-chunk-size SizeSuffix              Upload chunk size. Must fit in memory. (default 96M)
-      --b2-endpoint string                    Endpoint for the service.
-      --b2-hard-delete                        Permanently delete files on remote removal, otherwise hide files.
-      --b2-key string                         Application Key
-      --b2-test-mode string                   A flag string for X-Bz-Test-Mode header for debugging.
-      --b2-upload-cutoff SizeSuffix           Cutoff for switching to chunked upload. (default 200M)
-      --b2-versions                           Include old versions in directory listings.
-      --backup-dir string                     Make backups into hierarchy based in DIR.
-      --bind string                           Local address to bind to for outgoing connections, IPv4, IPv6 or name.
-      --box-client-id string                  Box App Client Id.
-      --box-client-secret string              Box App Client Secret
-      --box-commit-retries int                Max number of times to try committing a multipart file. (default 100)
-      --box-upload-cutoff SizeSuffix          Cutoff for switching to multipart upload (>= 50MB). (default 50M)
-      --buffer-size int                       In memory buffer size when reading files for each --transfer. (default 16M)
-      --bwlimit BwTimetable                   Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
-      --cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
-      --cache-chunk-no-memory                 Disable the in-memory cache for storing chunks during streaming.
-      --cache-chunk-path string               Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
-      --cache-chunk-size SizeSuffix           The size of a chunk (partial file data). (default 5M)
-      --cache-chunk-total-size SizeSuffix     The total size that the chunks can take up on the local disk. (default 10G)
-      --cache-db-path string                  Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
-      --cache-db-purge                        Clear all the cached data for this remote on start.
-      --cache-db-wait-time Duration           How long to wait for the DB to be available - 0 is unlimited (default 1s)
-      --cache-dir string                      Directory rclone will use for caching. (default "$HOME/.cache/rclone")
-      --cache-info-age Duration               How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
-      --cache-plex-insecure string            Skip all certificate verifications when connecting to the Plex server
-      --cache-plex-password string            The password of the Plex user
-      --cache-plex-url string                 The URL of the Plex server
-      --cache-plex-username string            The username of the Plex user
-      --cache-read-retries int                How many times to retry a read from a cache storage. (default 10)
-      --cache-remote string                   Remote to cache.
-      --cache-rps int                         Limits the number of requests per second to the source FS (-1 to disable) (default -1)
-      --cache-tmp-upload-path string          Directory to keep temporary files until they are uploaded.
-      --cache-tmp-wait-time Duration          How long should files be stored in local cache before being uploaded (default 15s)
-      --cache-workers int                     How many workers should run in parallel to download chunks. (default 4)
-      --cache-writes                          Cache file data on writes through the FS
-      --checkers int                          Number of checkers to run in parallel. (default 8)
-  -c, --checksum                              Skip based on checksum & size, not mod-time & size
-      --config string                         Config file. (default "/home/ncw/.rclone.conf")
-      --contimeout duration                   Connect timeout (default 1m0s)
-  -L, --copy-links                            Follow symlinks and copy the pointed to item.
-      --cpuprofile string                     Write cpu profile to file
-      --crypt-directory-name-encryption       Option to either encrypt directory names or leave them intact. (default true)
-      --crypt-filename-encryption string      How to encrypt the filenames. (default "standard")
-      --crypt-password string                 Password or pass phrase for encryption.
-      --crypt-password2 string                Password or pass phrase for salt. Optional but recommended.
-      --crypt-remote string                   Remote to encrypt/decrypt.
-      --crypt-show-mapping                    For all files listed show how the names encrypt.
-      --delete-after                          When synchronizing, delete files on destination after transferring (default)
-      --delete-before                         When synchronizing, delete files on destination before transferring
-      --delete-during                         When synchronizing, delete files during transfer
-      --delete-excluded                       Delete files on dest excluded from sync
-      --disable string                        Disable a comma separated list of features. Use help to see a list.
-      --drive-acknowledge-abuse               Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
-      --drive-allow-import-name-change        Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
-      --drive-alternate-export                Use alternate export URLs for google documents export.,
-      --drive-auth-owner-only                 Only consider files owned by the authenticated user.
-      --drive-chunk-size SizeSuffix           Upload chunk size. Must a power of 2 >= 256k. (default 8M)
-      --drive-client-id string                Google Application Client Id
-      --drive-client-secret string            Google Application Client Secret
-      --drive-export-formats string           Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
-      --drive-formats string                  Deprecated: see export_formats
-      --drive-impersonate string              Impersonate this user when using a service account.
-      --drive-import-formats string           Comma separated list of preferred formats for uploading Google docs.
-      --drive-keep-revision-forever           Keep new head revision of each file forever.
-      --drive-list-chunk int                  Size of listing chunk 100-1000. 0 to disable. (default 1000)
-      --drive-root-folder-id string           ID of the root folder
-      --drive-scope string                    Scope that rclone should use when requesting access from drive.
-      --drive-service-account-credentials string   Service Account Credentials JSON blob
-      --drive-service-account-file string     Service Account Credentials JSON file path
-      --drive-shared-with-me                  Only show files that are shared with me.
-      --drive-skip-gdocs                      Skip google documents in all listings.
-      --drive-team-drive string               ID of the Team Drive
-      --drive-trashed-only                    Only show files that are in the trash.
-      --drive-upload-cutoff SizeSuffix        Cutoff for switching to chunked upload (default 8M)
-      --drive-use-created-date                Use file created date instead of modified date.,
-      --drive-use-trash                       Send files to the trash instead of deleting permanently. (default true)
-      --drive-v2-download-min-size SizeSuffix   If Object's are greater, use drive v2 API to download. (default off)
-      --dropbox-chunk-size SizeSuffix         Upload chunk size. (< 150M). (default 48M)
-      --dropbox-client-id string              Dropbox App Client Id
-      --dropbox-client-secret string          Dropbox App Client Secret
-      --dropbox-impersonate string            Impersonate this user when using a business account.
-  -n, --dry-run                               Do a trial run with no permanent changes
-      --dump string                           List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
-      --dump-bodies                           Dump HTTP headers and bodies - may contain sensitive info
-      --dump-headers                          Dump HTTP bodies - may contain sensitive info
-      --exclude stringArray                   Exclude files matching pattern
-      --exclude-from stringArray              Read exclude patterns from file
-      --exclude-if-present string             Exclude directories if filename is present
-      --fast-list                             Use recursive list if available. Uses more memory but fewer transactions.
-      --files-from stringArray                Read list of source-file names from file
-  -f, --filter stringArray                    Add a file-filtering rule
-      --filter-from stringArray               Read filtering patterns from a file
-      --ftp-host string                       FTP host to connect to
-      --ftp-pass string                       FTP password
-      --ftp-port string                       FTP port, leave blank to use default (21)
-      --ftp-user string                       FTP username, leave blank for current username, $USER
-      --gcs-bucket-acl string                 Access Control List for new buckets.
-      --gcs-client-id string                  Google Application Client Id
-      --gcs-client-secret string              Google Application Client Secret
-      --gcs-location string                   Location for the newly created buckets.
-      --gcs-object-acl string                 Access Control List for new objects.
-      --gcs-project-number string             Project number.
-      --gcs-service-account-file string       Service Account Credentials JSON file path
-      --gcs-storage-class string              The storage class to use when storing objects in Google Cloud Storage.
-      --http-url string                       URL of http host to connect to
-      --hubic-chunk-size SizeSuffix           Above this size files will be chunked into a _segments container. (default 5G)
-      --hubic-client-id string                Hubic Client Id
-      --hubic-client-secret string            Hubic Client Secret
-      --ignore-case                           Ignore case in filters (case insensitive)
-      --ignore-checksum                       Skip post copy check of checksums.
-      --ignore-errors                         delete even if there are I/O errors
-      --ignore-existing                       Skip all files that exist on destination
-      --ignore-size                           Ignore size when skipping use mod-time or checksum.
-  -I, --ignore-times                          Don't skip files that match size and time - transfer all files
-      --immutable                             Do not modify files. Fail if existing files have been modified.
-      --include stringArray                   Include files matching pattern
-      --include-from stringArray              Read include patterns from file
-      --jottacloud-hard-delete                Delete files permanently rather than putting them into the trash.
-      --jottacloud-md5-memory-limit SizeSuffix   Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
-      --jottacloud-mountpoint string          The mountpoint to use.
-      --jottacloud-pass string                Password.
-      --jottacloud-unlink                     Remove existing public link to file/folder with link command rather than creating.
-      --jottacloud-user string                User Name
-      --local-no-check-updated                Don't check to see if the files change during upload
-      --local-no-unicode-normalization        Don't apply unicode normalization to paths and filenames (Deprecated)
-      --local-nounc string                    Disable UNC (long path names) conversion on Windows
-      --log-file string                       Log everything to this file
-      --log-format string                     Comma separated list of log format options (default "date,time")
-      --log-level string                      Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
-      --low-level-retries int                 Number of low level retries to do. (default 10)
-      --max-age duration                      Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-      --max-backlog int                       Maximum number of objects in sync or check backlog. (default 10000)
-      --max-delete int                        When synchronizing, limit the number of deletes (default -1)
-      --max-depth int                         If set limits the recursion depth to this. (default -1)
-      --max-size int                          Only transfer files smaller than this in k or suffix b|k|M|G (default off)
-      --max-transfer int                      Maximum size of data to transfer. (default off)
-      --mega-debug                            Output more debug from Mega.
-      --mega-hard-delete                      Delete files permanently rather than putting them into the trash.
-      --mega-pass string                      Password.
-      --mega-user string                      User name
-      --memprofile string                     Write memory profile to file
-      --min-age duration                      Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-      --min-size int                          Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-      --modify-window duration                Max time diff to be considered the same (default 1ns)
-      --no-check-certificate                  Do not verify the server SSL certificate. Insecure.
-      --no-gzip-encoding                      Don't set Accept-Encoding: gzip.
-      --no-traverse                           Obsolete - does nothing.
-      --no-update-modtime                     Don't update destination mod-time if files identical.
-  -x, --one-file-system                       Don't cross filesystem boundaries (unix/macOS only).
-      --onedrive-chunk-size SizeSuffix        Chunk size to upload files with - must be multiple of 320k. (default 10M)
-      --onedrive-client-id string             Microsoft App Client Id
-      --onedrive-client-secret string         Microsoft App Client Secret
-      --onedrive-drive-id string              The ID of the drive to use
-      --onedrive-drive-type string            The type of the drive ( personal | business | documentLibrary )
-      --onedrive-expose-onenote-files         Set to make OneNote files show up in directory listings.
-      --opendrive-password string             Password.
-      --opendrive-username string             Username
-      --pcloud-client-id string               Pcloud App Client Id
-      --pcloud-client-secret string           Pcloud App Client Secret
-  -P, --progress                              Show progress during transfer.
-      --qingstor-access-key-id string         QingStor Access Key ID
-      --qingstor-connection-retries int       Number of connection retries. (default 3)
-      --qingstor-endpoint string              Enter a endpoint URL to connection QingStor API.
-      --qingstor-env-auth                     Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
-      --qingstor-secret-access-key string     QingStor Secret Access Key (password)
-      --qingstor-zone string                  Zone to connect to.
-  -q, --quiet                                 Print as little stuff as possible
-      --rc                                    Enable the remote control server.
-      --rc-addr string                        IPaddress:Port or :Port to bind server to. (default "localhost:5572")
-      --rc-cert string                        SSL PEM key (concatenation of certificate and CA certificate)
-      --rc-client-ca string                   Client certificate authority to verify clients with
-      --rc-files string                       Path to local files to serve on the HTTP server.
-      --rc-htpasswd string                    htpasswd file - if not provided no authentication is done
-      --rc-key string                         SSL PEM Private key
-      --rc-max-header-bytes int               Maximum size of request header (default 4096)
-      --rc-no-auth                            Don't require auth for certain methods.
-      --rc-pass string                        Password for authentication.
-      --rc-realm string                       realm for authentication (default "rclone")
-      --rc-serve                              Enable the serving of remote objects.
-      --rc-server-read-timeout duration       Timeout for server reading data (default 1h0m0s)
-      --rc-server-write-timeout duration      Timeout for server writing data (default 1h0m0s)
-      --rc-user string                        User name for authentication.
-      --retries int                           Retry operations this many times if they fail (default 3)
-      --retries-sleep duration                Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
-      --s3-access-key-id string               AWS Access Key ID.
-      --s3-acl string                         Canned ACL used when creating buckets and storing or copying objects.
-      --s3-chunk-size SizeSuffix              Chunk size to use for uploading. (default 5M)
-      --s3-disable-checksum                   Don't store MD5 checksum with object metadata
-      --s3-endpoint string                    Endpoint for S3 API.
-      --s3-env-auth                           Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
-      --s3-force-path-style                   If true use path style access if false use virtual hosted style. (default true)
-      --s3-location-constraint string         Location constraint - must be set to match the Region.
-      --s3-provider string                    Choose your S3 provider.
-      --s3-region string                      Region to connect to.
-      --s3-secret-access-key string           AWS Secret Access Key (password)
-      --s3-server-side-encryption string      The server-side encryption algorithm used when storing this object in S3.
-      --s3-session-token string               An AWS session token
-      --s3-sse-kms-key-id string              If using KMS ID you must provide the ARN of Key.
-      --s3-storage-class string               The storage class to use when storing new objects in S3.
-      --s3-upload-concurrency int             Concurrency for multipart uploads. (default 2)
-      --s3-v2-auth                            If true use v2 authentication.
-      --sftp-ask-password                     Allow asking for SFTP password when needed.
-      --sftp-disable-hashcheck                Disable the execution of SSH commands to determine if remote file hashing is available.
-      --sftp-host string   SSH host to connect to
-      --sftp-key-file string   Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
-      --sftp-pass string   SSH password, leave blank to use ssh-agent.
-      --sftp-path-override string   Override path used by SSH connection.
-      --sftp-port string   SSH port, leave blank to use default (22)
-      --sftp-set-modtime   Set the modified time on the remote if set. (default true)
-      --sftp-use-insecure-cipher   Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
-      --sftp-user string   SSH username, leave blank for current username, ncw
-      --size-only   Skip based on size only, not mod-time or checksum
-      --skip-links   Don't warn about skipped symlinks.
-      --stats duration   Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
-      --stats-file-name-length int   Max file name length in stats. 0 for no limit (default 40)
-      --stats-log-level string   Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
-      --stats-one-line   Make the stats fit on one line.
-      --stats-unit string   Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
-      --streaming-upload-cutoff int   Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
-      --suffix string   Suffix for use with --backup-dir.
-      --swift-auth string   Authentication URL for server (OS_AUTH_URL).
-      --swift-auth-token string   Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
-      --swift-auth-version int   AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
-      --swift-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
-      --swift-domain string   User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
-      --swift-endpoint-type string   Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-      --swift-env-auth   Get swift credentials from environment variables in standard OpenStack form.
-      --swift-key string   API key or password (OS_PASSWORD).
-      --swift-region string   Region name - optional (OS_REGION_NAME)
-      --swift-storage-policy string   The storage policy to use when creating a new container
-      --swift-storage-url string   Storage URL - optional (OS_STORAGE_URL)
-      --swift-tenant string   Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
-      --swift-tenant-domain string   Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
-      --swift-tenant-id string   Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
-      --swift-user string   User name to log in (OS_USERNAME).
-      --swift-user-id string   User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
-      --syslog   Use Syslog for logging
-      --syslog-facility string   Facility for syslog, eg KERN,USER,... (default "DAEMON")
-      --timeout duration   IO idle timeout (default 5m0s)
-      --tpslimit float   Limit HTTP transactions per second to this.
-      --tpslimit-burst int   Max burst of transactions for --tpslimit. (default 1)
-      --track-renames   When synchronizing, track file renames and do a server side move if possible
-      --transfers int   Number of file transfers to run in parallel. (default 4)
-      --union-remotes string   List of space separated remotes.
-  -u, --update   Skip files that are newer on the destination.
-      --use-server-modtime   Use server modified time instead of object metadata
-      --user-agent string   Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45")
-  -v, --verbose count   Print lots more stuff (repeat for more)
-      --webdav-bearer-token string   Bearer token instead of user/pass (eg a Macaroon)
-      --webdav-pass string   Password.
-      --webdav-url string   URL of http host to connect to
-      --webdav-user string   User name
-      --webdav-vendor string   Name of the Webdav site/service/software you are using
-      --yandex-client-id string   Yandex Client Id
-      --yandex-client-secret string   Yandex Client Secret
-      --yandex-unlink   Remove existing public link to file/folder with link command rather than creating.
+      --acd-auth-url string   Auth server URL.
+      --acd-client-id string   Amazon Application Client ID.
+      --acd-client-secret string   Amazon Application Client Secret.
+      --acd-templink-threshold SizeSuffix   Files >= this size will be downloaded via their tempLink. (default 9G)
+      --acd-token-url string   Token server url.
+      --acd-upload-wait-per-gb Duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+      --alias-remote string   Remote or path to alias.
+      --ask-password   Allow prompt for password for encrypted configuration. (default true)
+      --auto-confirm   If enabled, do not request console confirmation.
+      --azureblob-access-tier string   Access tier of blob: hot, cool or archive.
+      --azureblob-account string   Storage Account Name (leave blank to use connection string or SAS URL)
+      --azureblob-chunk-size SizeSuffix   Upload chunk size (<= 100MB). (default 4M)
+      --azureblob-endpoint string   Endpoint for the service
+      --azureblob-key string   Storage Account Key (leave blank to use connection string or SAS URL)
+      --azureblob-list-chunk int   Size of blob list. (default 5000)
+      --azureblob-sas-url string   SAS URL for container level access only
+      --azureblob-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+      --b2-account string   Account ID or Application Key ID
+      --b2-chunk-size SizeSuffix   Upload chunk size. Must fit in memory. (default 96M)
+      --b2-disable-checksum   Disable checksums for large (> upload cutoff) files
+      --b2-endpoint string   Endpoint for the service.
+      --b2-hard-delete   Permanently delete files on remote removal, otherwise hide files.
+      --b2-key string   Application Key
+      --b2-test-mode string   A flag string for X-Bz-Test-Mode header for debugging.
+      --b2-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload. (default 200M)
+      --b2-versions   Include old versions in directory listings.
+      --backup-dir string   Make backups into hierarchy based in DIR.
+      --bind string   Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+      --box-client-id string   Box App Client Id.
+      --box-client-secret string   Box App Client Secret
+      --box-commit-retries int   Max number of times to try committing a multipart file. (default 100)
+      --box-upload-cutoff SizeSuffix   Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+      --buffer-size SizeSuffix   In memory buffer size when reading files for each --transfer. (default 16M)
+      --bwlimit BwTimetable   Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+      --cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+      --cache-chunk-no-memory   Disable the in-memory cache for storing chunks during streaming.
+      --cache-chunk-path string   Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
+      --cache-chunk-size SizeSuffix   The size of a chunk (partial file data). (default 5M)
+      --cache-chunk-total-size SizeSuffix   The total size that the chunks can take up on the local disk. (default 10G)
+      --cache-db-path string   Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
+      --cache-db-purge   Clear all the cached data for this remote on start.
+      --cache-db-wait-time Duration   How long to wait for the DB to be available - 0 is unlimited (default 1s)
+      --cache-dir string   Directory rclone will use for caching. (default "$HOME/.cache/rclone")
+      --cache-info-age Duration   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+      --cache-plex-insecure string   Skip all certificate verifications when connecting to the Plex server
+      --cache-plex-password string   The password of the Plex user
+      --cache-plex-url string   The URL of the Plex server
+      --cache-plex-username string   The username of the Plex user
+      --cache-read-retries int   How many times to retry a read from a cache storage. (default 10)
+      --cache-remote string   Remote to cache.
+      --cache-rps int   Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+      --cache-tmp-upload-path string   Directory to keep temporary files until they are uploaded.
+      --cache-tmp-wait-time Duration   How long should files be stored in local cache before being uploaded (default 15s)
+      --cache-workers int   How many workers should run in parallel to download chunks. (default 4)
+      --cache-writes   Cache file data on writes through the FS
+      --checkers int   Number of checkers to run in parallel. (default 8)
+  -c, --checksum   Skip based on checksum (if available) & size, not mod-time & size
+      --config string   Config file. (default "/home/ncw/.rclone.conf")
+      --contimeout duration   Connect timeout (default 1m0s)
+  -L, --copy-links   Follow symlinks and copy the pointed to item.
+      --cpuprofile string   Write cpu profile to file
+      --crypt-directory-name-encryption   Option to either encrypt directory names or leave them intact. (default true)
+      --crypt-filename-encryption string   How to encrypt the filenames. (default "standard")
+      --crypt-password string   Password or pass phrase for encryption.
+      --crypt-password2 string   Password or pass phrase for salt. Optional but recommended.
+      --crypt-remote string   Remote to encrypt/decrypt.
+      --crypt-show-mapping   For all files listed show how the names encrypt.
+      --delete-after   When synchronizing, delete files on destination after transferring (default)
+      --delete-before   When synchronizing, delete files on destination before transferring
+      --delete-during   When synchronizing, delete files during transfer
+      --delete-excluded   Delete files on dest excluded from sync
+      --disable string   Disable a comma separated list of features. Use help to see a list.
+      --drive-acknowledge-abuse   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+      --drive-allow-import-name-change   Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+      --drive-alternate-export   Use alternate export URLs for google documents export.,
+      --drive-auth-owner-only   Only consider files owned by the authenticated user.
+      --drive-chunk-size SizeSuffix   Upload chunk size. Must a power of 2 >= 256k. (default 8M)
+      --drive-client-id string   Google Application Client Id
+      --drive-client-secret string   Google Application Client Secret
+      --drive-export-formats string   Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+      --drive-formats string   Deprecated: see export_formats
+      --drive-impersonate string   Impersonate this user when using a service account.
+      --drive-import-formats string   Comma separated list of preferred formats for uploading Google docs.
+      --drive-keep-revision-forever   Keep new head revision of each file forever.
+      --drive-list-chunk int   Size of listing chunk 100-1000. 0 to disable. (default 1000)
+      --drive-pacer-burst int   Number of API calls to allow without sleeping. (default 100)
+      --drive-pacer-min-sleep Duration   Minimum time to sleep between API calls. (default 100ms)
+      --drive-root-folder-id string   ID of the root folder
+      --drive-scope string   Scope that rclone should use when requesting access from drive.
+      --drive-service-account-credentials string   Service Account Credentials JSON blob
+      --drive-service-account-file string   Service Account Credentials JSON file path
+      --drive-shared-with-me   Only show files that are shared with me.
+      --drive-skip-gdocs   Skip google documents in all listings.
+      --drive-team-drive string   ID of the Team Drive
+      --drive-trashed-only   Only show files that are in the trash.
+      --drive-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 8M)
+      --drive-use-created-date   Use file created date instead of modified date.,
+      --drive-use-trash   Send files to the trash instead of deleting permanently. (default true)
+      --drive-v2-download-min-size SizeSuffix   If Object's are greater, use drive v2 API to download. (default off)
+      --dropbox-chunk-size SizeSuffix   Upload chunk size. (< 150M). (default 48M)
+      --dropbox-client-id string   Dropbox App Client Id
+      --dropbox-client-secret string   Dropbox App Client Secret
+      --dropbox-impersonate string   Impersonate this user when using a business account.
+  -n, --dry-run   Do a trial run with no permanent changes
+      --dump DumpFlags   List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+      --dump-bodies   Dump HTTP headers and bodies - may contain sensitive info
+      --dump-headers   Dump HTTP bodies - may contain sensitive info
+      --exclude stringArray   Exclude files matching pattern
+      --exclude-from stringArray   Read exclude patterns from file
+      --exclude-if-present string   Exclude directories if filename is present
+      --fast-list   Use recursive list if available. Uses more memory but fewer transactions.
+      --files-from stringArray   Read list of source-file names from file
+  -f, --filter stringArray   Add a file-filtering rule
+      --filter-from stringArray   Read filtering patterns from a file
+      --ftp-host string   FTP host to connect to
+      --ftp-pass string   FTP password
+      --ftp-port string   FTP port, leave blank to use default (21)
+      --ftp-user string   FTP username, leave blank for current username, $USER
+      --gcs-bucket-acl string   Access Control List for new buckets.
+      --gcs-client-id string   Google Application Client Id
+      --gcs-client-secret string   Google Application Client Secret
+      --gcs-location string   Location for the newly created buckets.
+      --gcs-object-acl string   Access Control List for new objects.
+      --gcs-project-number string   Project number.
+      --gcs-service-account-file string   Service Account Credentials JSON file path
+      --gcs-storage-class string   The storage class to use when storing objects in Google Cloud Storage.
+      --http-url string   URL of http host to connect to
+      --hubic-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
+      --hubic-client-id string   Hubic Client Id
+      --hubic-client-secret string   Hubic Client Secret
+      --hubic-no-chunk   Don't chunk files during streaming upload.
+      --ignore-case   Ignore case in filters (case insensitive)
+      --ignore-checksum   Skip post copy check of checksums.
+      --ignore-errors   delete even if there are I/O errors
+      --ignore-existing   Skip all files that exist on destination
+      --ignore-size   Ignore size when skipping use mod-time or checksum.
+  -I, --ignore-times   Don't skip files that match size and time - transfer all files
+      --immutable   Do not modify files. Fail if existing files have been modified.
+      --include stringArray   Include files matching pattern
+      --include-from stringArray   Read include patterns from file
+      --jottacloud-hard-delete   Delete files permanently rather than putting them into the trash.
+      --jottacloud-md5-memory-limit SizeSuffix   Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+      --jottacloud-mountpoint string   The mountpoint to use.
+      --jottacloud-unlink   Remove existing public link to file/folder with link command rather than creating.
+      --jottacloud-upload-resume-limit SizeSuffix   Files bigger than this can be resumed if the upload fail's. (default 10M)
+      --jottacloud-user string   User Name:
+  -l, --links   Translate symlinks to/from regular files with a '.rclonelink' extension
+      --local-no-check-updated   Don't check to see if the files change during upload
+      --local-no-unicode-normalization   Don't apply unicode normalization to paths and filenames (Deprecated)
+      --local-nounc string   Disable UNC (long path names) conversion on Windows
+      --log-file string   Log everything to this file
+      --log-format string   Comma separated list of log format options (default "date,time")
+      --log-level string   Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+      --low-level-retries int   Number of low level retries to do. (default 10)
+      --max-age Duration   Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --max-backlog int   Maximum number of objects in sync or check backlog. (default 10000)
+      --max-delete int   When synchronizing, limit the number of deletes (default -1)
+      --max-depth int   If set limits the recursion depth to this. (default -1)
+      --max-size SizeSuffix   Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+      --max-transfer SizeSuffix   Maximum size of data to transfer. (default off)
+      --mega-debug   Output more debug from Mega.
+      --mega-hard-delete   Delete files permanently rather than putting them into the trash.
+      --mega-pass string   Password.
+      --mega-user string   User name
+      --memprofile string   Write memory profile to file
+      --min-age Duration   Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --min-size SizeSuffix   Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+      --modify-window duration   Max time diff to be considered the same (default 1ns)
+      --no-check-certificate   Do not verify the server SSL certificate. Insecure.
+      --no-gzip-encoding   Don't set Accept-Encoding: gzip.
+      --no-traverse   Don't traverse destination file system on copy.
+      --no-update-modtime   Don't update destination mod-time if files identical.
+  -x, --one-file-system   Don't cross filesystem boundaries (unix/macOS only).
+      --onedrive-chunk-size SizeSuffix   Chunk size to upload files with - must be multiple of 320k. (default 10M)
+      --onedrive-client-id string   Microsoft App Client Id
+      --onedrive-client-secret string   Microsoft App Client Secret
+      --onedrive-drive-id string   The ID of the drive to use
+      --onedrive-drive-type string   The type of the drive ( personal | business | documentLibrary )
+      --onedrive-expose-onenote-files   Set to make OneNote files show up in directory listings.
+      --opendrive-password string   Password.
+      --opendrive-username string   Username
+      --pcloud-client-id string   Pcloud App Client Id
+      --pcloud-client-secret string   Pcloud App Client Secret
+  -P, --progress   Show progress during transfer.
+      --qingstor-access-key-id string   QingStor Access Key ID
+      --qingstor-chunk-size SizeSuffix   Chunk size to use for uploading. (default 4M)
+      --qingstor-connection-retries int   Number of connection retries. (default 3)
+      --qingstor-endpoint string   Enter a endpoint URL to connection QingStor API.
+      --qingstor-env-auth   Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
+      --qingstor-secret-access-key string   QingStor Secret Access Key (password)
+      --qingstor-upload-concurrency int   Concurrency for multipart uploads. (default 1)
+      --qingstor-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 200M)
+      --qingstor-zone string   Zone to connect to.
+  -q, --quiet   Print as little stuff as possible
+      --rc   Enable the remote control server.
+      --rc-addr string   IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+      --rc-cert string   SSL PEM key (concatenation of certificate and CA certificate)
+      --rc-client-ca string   Client certificate authority to verify clients with
+      --rc-files string   Path to local files to serve on the HTTP server.
+      --rc-htpasswd string   htpasswd file - if not provided no authentication is done
+      --rc-key string   SSL PEM Private key
+      --rc-max-header-bytes int   Maximum size of request header (default 4096)
+      --rc-no-auth   Don't require auth for certain methods.
+      --rc-pass string   Password for authentication.
+      --rc-realm string   realm for authentication (default "rclone")
+      --rc-serve   Enable the serving of remote objects.
+      --rc-server-read-timeout duration   Timeout for server reading data (default 1h0m0s)
+      --rc-server-write-timeout duration   Timeout for server writing data (default 1h0m0s)
+      --rc-user string   User name for authentication.
+      --retries int   Retry operations this many times if they fail (default 3)
+      --retries-sleep duration   Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+      --s3-access-key-id string   AWS Access Key ID.
+      --s3-acl string   Canned ACL used when creating buckets and storing or copying objects.
+      --s3-bucket-acl string   Canned ACL used when creating buckets.
+      --s3-chunk-size SizeSuffix   Chunk size to use for uploading. (default 5M)
+      --s3-disable-checksum   Don't store MD5 checksum with object metadata
+      --s3-endpoint string   Endpoint for S3 API.
+      --s3-env-auth   Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+      --s3-force-path-style   If true use path style access if false use virtual hosted style. (default true)
+      --s3-location-constraint string   Location constraint - must be set to match the Region.
+      --s3-provider string   Choose your S3 provider.
+      --s3-region string   Region to connect to.
+      --s3-secret-access-key string   AWS Secret Access Key (password)
+      --s3-server-side-encryption string   The server-side encryption algorithm used when storing this object in S3.
+      --s3-session-token string   An AWS session token
+      --s3-sse-kms-key-id string   If using KMS ID you must provide the ARN of Key.
+      --s3-storage-class string   The storage class to use when storing new objects in S3.
+      --s3-upload-concurrency int   Concurrency for multipart uploads. (default 4)
+      --s3-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 200M)
+      --s3-v2-auth   If true use v2 authentication.
+      --sftp-ask-password   Allow asking for SFTP password when needed.
+      --sftp-disable-hashcheck   Disable the execution of SSH commands to determine if remote file hashing is available.
+      --sftp-host string   SSH host to connect to
+      --sftp-key-file string   Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
+      --sftp-key-file-pass string   The passphrase to decrypt the PEM-encoded private key file.
+      --sftp-key-use-agent   When set forces the usage of the ssh-agent.
+      --sftp-pass string   SSH password, leave blank to use ssh-agent.
+      --sftp-path-override string   Override path used by SSH connection.
+      --sftp-port string   SSH port, leave blank to use default (22)
+      --sftp-set-modtime   Set the modified time on the remote if set. (default true)
+      --sftp-use-insecure-cipher   Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+      --sftp-user string   SSH username, leave blank for current username, ncw
+      --size-only   Skip based on size only, not mod-time or checksum
+      --skip-links   Don't warn about skipped symlinks.
+      --stats duration   Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+      --stats-file-name-length int   Max file name length in stats. 0 for no limit (default 45)
+      --stats-log-level string   Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+      --stats-one-line   Make the stats fit on one line.
+      --stats-unit string   Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+      --streaming-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+      --suffix string   Suffix for use with --backup-dir.
+      --swift-application-credential-id string   Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
+      --swift-application-credential-name string   Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
+      --swift-application-credential-secret string   Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
+      --swift-auth string   Authentication URL for server (OS_AUTH_URL).
+      --swift-auth-token string   Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+      --swift-auth-version int   AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+      --swift-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
+      --swift-domain string   User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+      --swift-endpoint-type string   Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+      --swift-env-auth   Get swift credentials from environment variables in standard OpenStack form.
+      --swift-key string   API key or password (OS_PASSWORD).
+      --swift-no-chunk   Don't chunk files during streaming upload.
+      --swift-region string   Region name - optional (OS_REGION_NAME)
+      --swift-storage-policy string   The storage policy to use when creating a new container
+      --swift-storage-url string   Storage URL - optional (OS_STORAGE_URL)
+      --swift-tenant string   Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+      --swift-tenant-domain string   Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+      --swift-tenant-id string   Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+      --swift-user string   User name to log in (OS_USERNAME).
+      --swift-user-id string   User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+      --syslog   Use Syslog for logging
+      --syslog-facility string   Facility for syslog, eg KERN,USER,... (default "DAEMON")
+      --timeout duration   IO idle timeout (default 5m0s)
+      --tpslimit float   Limit HTTP transactions per second to this.
+      --tpslimit-burst int   Max burst of transactions for --tpslimit. (default 1)
+      --track-renames   When synchronizing, track file renames and do a server side move if possible
+      --transfers int   Number of file transfers to run in parallel. (default 4)
+      --union-remotes string   List of space separated remotes.
+  -u, --update   Skip files that are newer on the destination.
+      --use-cookies   Enable session cookiejar.
+      --use-mmap   Use mmap allocator (see docs).
+      --use-server-modtime   Use server modified time instead of object metadata
+      --user-agent string   Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
+  -v, --verbose count   Print lots more stuff (repeat for more)
+      --webdav-bearer-token string   Bearer token instead of user/pass (eg a Macaroon)
+      --webdav-pass string   Password.
+      --webdav-url string   URL of http host to connect to
+      --webdav-user string   User name
+      --webdav-vendor string   Name of the Webdav site/service/software you are using
+      --yandex-client-id string   Yandex Client Id
+      --yandex-client-secret string   Yandex Client Secret
+      --yandex-unlink   Remove existing public link to file/folder with link command rather than creating.
 ```
 
 ### SEE ALSO
 
 * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
 
-###### Auto generated by spf13/cobra on 24-Nov-2018
+###### Auto generated by spf13/cobra on 9-Feb-2019
diff --git a/docs/content/commands/rclone_sha1sum.md b/docs/content/commands/rclone_sha1sum.md
index 71977512d..b22b591b0 100644
--- a/docs/content/commands/rclone_sha1sum.md
+++ b/docs/content/commands/rclone_sha1sum.md
@@ -1,5 +1,5 @@
 ---
-date: 2018-11-24T13:43:29Z
+date: 2019-02-09T10:42:18Z
 title: "rclone sha1sum"
 slug: rclone_sha1sum
 url: /commands/rclone_sha1sum/
@@ -28,285 +28,303 @@ rclone sha1sum remote:path [flags]
 ### Options inherited from parent commands
 
 ```
-      --acd-auth-url string   Auth server URL.
-      --acd-client-id string   Amazon Application Client ID.
-      --acd-client-secret string   Amazon Application Client Secret.
-      --acd-templink-threshold SizeSuffix   Files >= this size will be downloaded via their tempLink. (default 9G)
-      --acd-token-url string   Token server url.
-      --acd-upload-wait-per-gb Duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
-      --alias-remote string   Remote or path to alias.
-      --ask-password   Allow prompt for password for encrypted configuration. (default true)
-      --auto-confirm   If enabled, do not request console confirmation.
-      --azureblob-access-tier string   Access tier of blob: hot, cool or archive.
-      --azureblob-account string   Storage Account Name (leave blank to use connection string or SAS URL)
-      --azureblob-chunk-size SizeSuffix   Upload chunk size (<= 100MB). (default 4M)
-      --azureblob-endpoint string   Endpoint for the service
-      --azureblob-key string   Storage Account Key (leave blank to use connection string or SAS URL)
-      --azureblob-list-chunk int   Size of blob list. (default 5000)
-      --azureblob-sas-url string   SAS URL for container level access only
-      --azureblob-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (<= 256MB). (default 256M)
-      --b2-account string   Account ID or Application Key ID
-      --b2-chunk-size SizeSuffix   Upload chunk size. Must fit in memory. (default 96M)
-      --b2-endpoint string   Endpoint for the service.
-      --b2-hard-delete   Permanently delete files on remote removal, otherwise hide files.
-      --b2-key string   Application Key
-      --b2-test-mode string   A flag string for X-Bz-Test-Mode header for debugging.
-      --b2-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload. (default 200M)
-      --b2-versions   Include old versions in directory listings.
-      --backup-dir string   Make backups into hierarchy based in DIR.
-      --bind string   Local address to bind to for outgoing connections, IPv4, IPv6 or name.
-      --box-client-id string   Box App Client Id.
-      --box-client-secret string   Box App Client Secret
-      --box-commit-retries int   Max number of times to try committing a multipart file. (default 100)
-      --box-upload-cutoff SizeSuffix   Cutoff for switching to multipart upload (>= 50MB). (default 50M)
-      --buffer-size int   In memory buffer size when reading files for each --transfer. (default 16M)
-      --bwlimit BwTimetable   Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
-      --cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
-      --cache-chunk-no-memory   Disable the in-memory cache for storing chunks during streaming.
-      --cache-chunk-path string   Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
-      --cache-chunk-size SizeSuffix   The size of a chunk (partial file data). (default 5M)
-      --cache-chunk-total-size SizeSuffix   The total size that the chunks can take up on the local disk. (default 10G)
-      --cache-db-path string   Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
-      --cache-db-purge   Clear all the cached data for this remote on start.
-      --cache-db-wait-time Duration   How long to wait for the DB to be available - 0 is unlimited (default 1s)
-      --cache-dir string   Directory rclone will use for caching. (default "$HOME/.cache/rclone")
-      --cache-info-age Duration   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
-      --cache-plex-insecure string   Skip all certificate verifications when connecting to the Plex server
-      --cache-plex-password string   The password of the Plex user
-      --cache-plex-url string   The URL of the Plex server
-      --cache-plex-username string   The username of the Plex user
-      --cache-read-retries int   How many times to retry a read from a cache storage. (default 10)
-      --cache-remote string   Remote to cache.
-      --cache-rps int   Limits the number of requests per second to the source FS (-1 to disable) (default -1)
-      --cache-tmp-upload-path string   Directory to keep temporary files until they are uploaded.
-      --cache-tmp-wait-time Duration   How long should files be stored in local cache before being uploaded (default 15s)
-      --cache-workers int   How many workers should run in parallel to download chunks. (default 4)
-      --cache-writes   Cache file data on writes through the FS
-      --checkers int   Number of checkers to run in parallel. (default 8)
-  -c, --checksum   Skip based on checksum & size, not mod-time & size
-      --config string   Config file. (default "/home/ncw/.rclone.conf")
-      --contimeout duration   Connect timeout (default 1m0s)
-  -L, --copy-links   Follow symlinks and copy the pointed to item.
-      --cpuprofile string   Write cpu profile to file
-      --crypt-directory-name-encryption   Option to either encrypt directory names or leave them intact. (default true)
-      --crypt-filename-encryption string   How to encrypt the filenames. (default "standard")
-      --crypt-password string   Password or pass phrase for encryption.
-      --crypt-password2 string   Password or pass phrase for salt. Optional but recommended.
-      --crypt-remote string   Remote to encrypt/decrypt.
-      --crypt-show-mapping   For all files listed show how the names encrypt.
-      --delete-after   When synchronizing, delete files on destination after transferring (default)
-      --delete-before   When synchronizing, delete files on destination before transferring
-      --delete-during   When synchronizing, delete files during transfer
-      --delete-excluded   Delete files on dest excluded from sync
-      --disable string   Disable a comma separated list of features. Use help to see a list.
-      --drive-acknowledge-abuse   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
-      --drive-allow-import-name-change   Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
-      --drive-alternate-export   Use alternate export URLs for google documents export.,
-      --drive-auth-owner-only   Only consider files owned by the authenticated user.
-      --drive-chunk-size SizeSuffix   Upload chunk size. Must a power of 2 >= 256k. (default 8M)
-      --drive-client-id string   Google Application Client Id
-      --drive-client-secret string   Google Application Client Secret
-      --drive-export-formats string   Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
-      --drive-formats string   Deprecated: see export_formats
-      --drive-impersonate string   Impersonate this user when using a service account.
-      --drive-import-formats string   Comma separated list of preferred formats for uploading Google docs.
-      --drive-keep-revision-forever   Keep new head revision of each file forever.
-      --drive-list-chunk int   Size of listing chunk 100-1000. 0 to disable. (default 1000)
-      --drive-root-folder-id string   ID of the root folder
-      --drive-scope string   Scope that rclone should use when requesting access from drive.
-      --drive-service-account-credentials string   Service Account Credentials JSON blob
-      --drive-service-account-file string   Service Account Credentials JSON file path
-      --drive-shared-with-me   Only show files that are shared with me.
-      --drive-skip-gdocs   Skip google documents in all listings.
-      --drive-team-drive string   ID of the Team Drive
-      --drive-trashed-only   Only show files that are in the trash.
-      --drive-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 8M)
-      --drive-use-created-date   Use file created date instead of modified date.,
-      --drive-use-trash   Send files to the trash instead of deleting permanently. (default true)
-      --drive-v2-download-min-size SizeSuffix   If Object's are greater, use drive v2 API to download. (default off)
-      --dropbox-chunk-size SizeSuffix   Upload chunk size. (< 150M). (default 48M)
-      --dropbox-client-id string   Dropbox App Client Id
-      --dropbox-client-secret string   Dropbox App Client Secret
-      --dropbox-impersonate string   Impersonate this user when using a business account.
-  -n, --dry-run   Do a trial run with no permanent changes
-      --dump string   List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
-      --dump-bodies   Dump HTTP headers and bodies - may contain sensitive info
-      --dump-headers   Dump HTTP bodies - may contain sensitive info
-      --exclude stringArray   Exclude files matching pattern
-      --exclude-from stringArray   Read exclude patterns from file
-      --exclude-if-present string   Exclude directories if filename is present
-      --fast-list   Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. 
(default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. 
- --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. 
- --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. 
- --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. - --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. 
(default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. (default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. 
Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. 
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export., + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. 
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date., + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP bodies - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M) + --jottacloud-user string User Name: + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_size.md b/docs/content/commands/rclone_size.md index 2963861db..0d8795f20 100644 --- a/docs/content/commands/rclone_size.md +++ b/docs/content/commands/rclone_size.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone size" slug: rclone_size url: /commands/rclone_size/ @@ -26,285 +26,303 @@ rclone size remote:path [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). 
(default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). 
(default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. (default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. 
(default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. - --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. 
(default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
- --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. 
(default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. 
- --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. 
- --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. 
- --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. - --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. 
(default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. (default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. 
Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. 
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export., + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. 
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors Delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping; use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access, if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_sync.md b/docs/content/commands/rclone_sync.md index ae863a24a..5298c143a 100644 --- a/docs/content/commands/rclone_sync.md +++ b/docs/content/commands/rclone_sync.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone sync" slug: rclone_sync url: /commands/rclone_sync/ @@ -46,285 +46,303 @@ rclone sync source:path dest:path [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). 
(default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). 
(default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. (default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. 
(default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. - --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. 
(default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
- --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. 
(default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. 
- --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. 
- --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. 
-      --sftp-host string                             SSH host to connect to
-      --sftp-key-file string                         Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
-      --sftp-pass string                             SSH password, leave blank to use ssh-agent.
-      --sftp-path-override string                    Override path used by SSH connection.
-      --sftp-port string                             SSH port, leave blank to use default (22)
-      --sftp-set-modtime                             Set the modified time on the remote if set. (default true)
-      --sftp-use-insecure-cipher                     Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
-      --sftp-user string                             SSH username, leave blank for current username, ncw
-      --size-only                                    Skip based on size only, not mod-time or checksum
-      --skip-links                                   Don't warn about skipped symlinks.
-      --stats duration                               Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
-      --stats-file-name-length int                   Max file name length in stats. 0 for no limit (default 40)
-      --stats-log-level string                       Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
-      --stats-one-line                               Make the stats fit on one line.
-      --stats-unit string                            Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
-      --streaming-upload-cutoff int                  Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
-      --suffix string                                Suffix for use with --backup-dir.
-      --swift-auth string                            Authentication URL for server (OS_AUTH_URL).
-      --swift-auth-token string                      Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
-      --swift-auth-version int                       AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
-      --swift-chunk-size SizeSuffix                  Above this size files will be chunked into a _segments container. (default 5G)
-      --swift-domain string                          User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
-      --swift-endpoint-type string                   Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-      --swift-env-auth                               Get swift credentials from environment variables in standard OpenStack form.
-      --swift-key string                             API key or password (OS_PASSWORD).
-      --swift-region string                          Region name - optional (OS_REGION_NAME)
-      --swift-storage-policy string                  The storage policy to use when creating a new container
-      --swift-storage-url string                     Storage URL - optional (OS_STORAGE_URL)
-      --swift-tenant string                          Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
-      --swift-tenant-domain string                   Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
-      --swift-tenant-id string                       Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
-      --swift-user string                            User name to log in (OS_USERNAME).
-      --swift-user-id string                         User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
-      --syslog                                       Use Syslog for logging
-      --syslog-facility string                       Facility for syslog, eg KERN,USER,... (default "DAEMON")
-      --timeout duration                             IO idle timeout (default 5m0s)
-      --tpslimit float                               Limit HTTP transactions per second to this.
-      --tpslimit-burst int                           Max burst of transactions for --tpslimit. (default 1)
-      --track-renames                                When synchronizing, track file renames and do a server side move if possible
-      --transfers int                                Number of file transfers to run in parallel. (default 4)
-      --union-remotes string                         List of space separated remotes.
-  -u, --update                                       Skip files that are newer on the destination.
-      --use-server-modtime                           Use server modified time instead of object metadata
-      --user-agent string                            Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45")
-  -v, --verbose count                                Print lots more stuff (repeat for more)
-      --webdav-bearer-token string                   Bearer token instead of user/pass (eg a Macaroon)
-      --webdav-pass string                           Password.
-      --webdav-url string                            URL of http host to connect to
-      --webdav-user string                           User name
-      --webdav-vendor string                         Name of the Webdav site/service/software you are using
-      --yandex-client-id string                      Yandex Client Id
-      --yandex-client-secret string                  Yandex Client Secret
-      --yandex-unlink                                Remove existing public link to file/folder with link command rather than creating.
+      --acd-auth-url string                          Auth server URL.
+      --acd-client-id string                         Amazon Application Client ID.
+      --acd-client-secret string                     Amazon Application Client Secret.
+      --acd-templink-threshold SizeSuffix            Files >= this size will be downloaded via their tempLink. (default 9G)
+      --acd-token-url string                         Token server url.
+      --acd-upload-wait-per-gb Duration              Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+      --alias-remote string                          Remote or path to alias.
+      --ask-password                                 Allow prompt for password for encrypted configuration. (default true)
+      --auto-confirm                                 If enabled, do not request console confirmation.
+      --azureblob-access-tier string                 Access tier of blob: hot, cool or archive.
+      --azureblob-account string                     Storage Account Name (leave blank to use connection string or SAS URL)
+      --azureblob-chunk-size SizeSuffix              Upload chunk size (<= 100MB). (default 4M)
+      --azureblob-endpoint string                    Endpoint for the service
+      --azureblob-key string                         Storage Account Key (leave blank to use connection string or SAS URL)
+      --azureblob-list-chunk int                     Size of blob list. (default 5000)
+      --azureblob-sas-url string                     SAS URL for container level access only
+      --azureblob-upload-cutoff SizeSuffix           Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+      --b2-account string                            Account ID or Application Key ID
+      --b2-chunk-size SizeSuffix                     Upload chunk size. Must fit in memory. (default 96M)
+      --b2-disable-checksum                          Disable checksums for large (> upload cutoff) files
+      --b2-endpoint string                           Endpoint for the service.
+      --b2-hard-delete                               Permanently delete files on remote removal, otherwise hide files.
+      --b2-key string                                Application Key
+      --b2-test-mode string                          A flag string for X-Bz-Test-Mode header for debugging.
+      --b2-upload-cutoff SizeSuffix                  Cutoff for switching to chunked upload. (default 200M)
+      --b2-versions                                  Include old versions in directory listings.
+      --backup-dir string                            Make backups into hierarchy based in DIR.
+      --bind string                                  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+      --box-client-id string                         Box App Client Id.
+      --box-client-secret string                     Box App Client Secret
+      --box-commit-retries int                       Max number of times to try committing a multipart file. (default 100)
+      --box-upload-cutoff SizeSuffix                 Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+      --buffer-size SizeSuffix                       In memory buffer size when reading files for each --transfer. (default 16M)
+      --bwlimit BwTimetable                          Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+      --cache-chunk-clean-interval Duration          How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+      --cache-chunk-no-memory                        Disable the in-memory cache for storing chunks during streaming.
+      --cache-chunk-path string                      Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
+      --cache-chunk-size SizeSuffix                  The size of a chunk (partial file data). (default 5M)
+      --cache-chunk-total-size SizeSuffix            The total size that the chunks can take up on the local disk. (default 10G)
+      --cache-db-path string                         Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
+      --cache-db-purge                               Clear all the cached data for this remote on start.
+      --cache-db-wait-time Duration                  How long to wait for the DB to be available - 0 is unlimited (default 1s)
+      --cache-dir string                             Directory rclone will use for caching. (default "$HOME/.cache/rclone")
+      --cache-info-age Duration                      How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+      --cache-plex-insecure string                   Skip all certificate verifications when connecting to the Plex server
+      --cache-plex-password string                   The password of the Plex user
+      --cache-plex-url string                        The URL of the Plex server
+      --cache-plex-username string                   The username of the Plex user
+      --cache-read-retries int                       How many times to retry a read from a cache storage. (default 10)
+      --cache-remote string                          Remote to cache.
+      --cache-rps int                                Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+      --cache-tmp-upload-path string                 Directory to keep temporary files until they are uploaded.
+      --cache-tmp-wait-time Duration                 How long should files be stored in local cache before being uploaded (default 15s)
+      --cache-workers int                            How many workers should run in parallel to download chunks. (default 4)
+      --cache-writes                                 Cache file data on writes through the FS
+      --checkers int                                 Number of checkers to run in parallel. (default 8)
+  -c, --checksum                                     Skip based on checksum (if available) & size, not mod-time & size
+      --config string                                Config file. (default "/home/ncw/.rclone.conf")
+      --contimeout duration                          Connect timeout (default 1m0s)
+  -L, --copy-links                                   Follow symlinks and copy the pointed to item.
+      --cpuprofile string                            Write cpu profile to file
+      --crypt-directory-name-encryption              Option to either encrypt directory names or leave them intact. (default true)
+      --crypt-filename-encryption string             How to encrypt the filenames. (default "standard")
+      --crypt-password string                        Password or pass phrase for encryption.
+      --crypt-password2 string                       Password or pass phrase for salt. Optional but recommended.
+      --crypt-remote string                          Remote to encrypt/decrypt.
+      --crypt-show-mapping                           For all files listed show how the names encrypt.
+      --delete-after                                 When synchronizing, delete files on destination after transferring (default)
+      --delete-before                                When synchronizing, delete files on destination before transferring
+      --delete-during                                When synchronizing, delete files during transfer
+      --delete-excluded                              Delete files on dest excluded from sync
+      --disable string                               Disable a comma separated list of features. Use help to see a list.
+      --drive-acknowledge-abuse                      Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+      --drive-allow-import-name-change               Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+      --drive-alternate-export                       Use alternate export URLs for google documents export.,
+      --drive-auth-owner-only                        Only consider files owned by the authenticated user.
+      --drive-chunk-size SizeSuffix                  Upload chunk size. Must a power of 2 >= 256k. (default 8M)
+      --drive-client-id string                       Google Application Client Id
+      --drive-client-secret string                   Google Application Client Secret
+      --drive-export-formats string                  Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+      --drive-formats string                         Deprecated: see export_formats
+      --drive-impersonate string                     Impersonate this user when using a service account.
+      --drive-import-formats string                  Comma separated list of preferred formats for uploading Google docs.
+      --drive-keep-revision-forever                  Keep new head revision of each file forever.
+      --drive-list-chunk int                         Size of listing chunk 100-1000. 0 to disable. (default 1000)
+      --drive-pacer-burst int                        Number of API calls to allow without sleeping. (default 100)
+      --drive-pacer-min-sleep Duration               Minimum time to sleep between API calls. (default 100ms)
+      --drive-root-folder-id string                  ID of the root folder
+      --drive-scope string                           Scope that rclone should use when requesting access from drive.
+      --drive-service-account-credentials string     Service Account Credentials JSON blob
+      --drive-service-account-file string            Service Account Credentials JSON file path
+      --drive-shared-with-me                         Only show files that are shared with me.
+      --drive-skip-gdocs                             Skip google documents in all listings.
+      --drive-team-drive string                      ID of the Team Drive
+      --drive-trashed-only                           Only show files that are in the trash.
+      --drive-upload-cutoff SizeSuffix               Cutoff for switching to chunked upload (default 8M)
+      --drive-use-created-date                       Use file created date instead of modified date.,
+      --drive-use-trash                              Send files to the trash instead of deleting permanently. (default true)
+      --drive-v2-download-min-size SizeSuffix        If Object's are greater, use drive v2 API to download. (default off)
+      --dropbox-chunk-size SizeSuffix                Upload chunk size. (< 150M). (default 48M)
+      --dropbox-client-id string                     Dropbox App Client Id
+      --dropbox-client-secret string                 Dropbox App Client Secret
+      --dropbox-impersonate string                   Impersonate this user when using a business account.
+  -n, --dry-run                                      Do a trial run with no permanent changes
+      --dump DumpFlags                               List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+      --dump-bodies                                  Dump HTTP headers and bodies - may contain sensitive info
+      --dump-headers                                 Dump HTTP bodies - may contain sensitive info
+      --exclude stringArray                          Exclude files matching pattern
+      --exclude-from stringArray                     Read exclude patterns from file
+      --exclude-if-present string                    Exclude directories if filename is present
+      --fast-list                                    Use recursive list if available. Uses more memory but fewer transactions.
+      --files-from stringArray                       Read list of source-file names from file
+  -f, --filter stringArray                           Add a file-filtering rule
+      --filter-from stringArray                      Read filtering patterns from a file
+      --ftp-host string                              FTP host to connect to
+      --ftp-pass string                              FTP password
+      --ftp-port string                              FTP port, leave blank to use default (21)
+      --ftp-user string                              FTP username, leave blank for current username, $USER
+      --gcs-bucket-acl string                        Access Control List for new buckets.
+      --gcs-client-id string                         Google Application Client Id
+      --gcs-client-secret string                     Google Application Client Secret
+      --gcs-location string                          Location for the newly created buckets.
+      --gcs-object-acl string                        Access Control List for new objects.
+      --gcs-project-number string                    Project number.
+      --gcs-service-account-file string              Service Account Credentials JSON file path
+      --gcs-storage-class string                     The storage class to use when storing objects in Google Cloud Storage.
+      --http-url string                              URL of http host to connect to
+      --hubic-chunk-size SizeSuffix                  Above this size files will be chunked into a _segments container. (default 5G)
+      --hubic-client-id string                       Hubic Client Id
+      --hubic-client-secret string                   Hubic Client Secret
+      --hubic-no-chunk                               Don't chunk files during streaming upload.
+      --ignore-case                                  Ignore case in filters (case insensitive)
+      --ignore-checksum                              Skip post copy check of checksums.
+      --ignore-errors                                delete even if there are I/O errors
+      --ignore-existing                              Skip all files that exist on destination
+      --ignore-size                                  Ignore size when skipping use mod-time or checksum.
+  -I, --ignore-times                                 Don't skip files that match size and time - transfer all files
+      --immutable                                    Do not modify files. Fail if existing files have been modified.
+      --include stringArray                          Include files matching pattern
+      --include-from stringArray                     Read include patterns from file
+      --jottacloud-hard-delete                       Delete files permanently rather than putting them into the trash.
+      --jottacloud-md5-memory-limit SizeSuffix       Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+      --jottacloud-mountpoint string                 The mountpoint to use.
+      --jottacloud-unlink                            Remove existing public link to file/folder with link command rather than creating.
+      --jottacloud-upload-resume-limit SizeSuffix    Files bigger than this can be resumed if the upload fail's. (default 10M)
+      --jottacloud-user string                       User Name:
+  -l, --links                                        Translate symlinks to/from regular files with a '.rclonelink' extension
+      --local-no-check-updated                       Don't check to see if the files change during upload
+      --local-no-unicode-normalization               Don't apply unicode normalization to paths and filenames (Deprecated)
+      --local-nounc string                           Disable UNC (long path names) conversion on Windows
+      --log-file string                              Log everything to this file
+      --log-format string                            Comma separated list of log format options (default "date,time")
+      --log-level string                             Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+      --low-level-retries int                        Number of low level retries to do. (default 10)
+      --max-age Duration                             Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --max-backlog int                              Maximum number of objects in sync or check backlog. (default 10000)
+      --max-delete int                               When synchronizing, limit the number of deletes (default -1)
+      --max-depth int                                If set limits the recursion depth to this. (default -1)
+      --max-size SizeSuffix                          Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+      --max-transfer SizeSuffix                      Maximum size of data to transfer. (default off)
+      --mega-debug                                   Output more debug from Mega.
+      --mega-hard-delete                             Delete files permanently rather than putting them into the trash.
+      --mega-pass string                             Password.
+      --mega-user string                             User name
+      --memprofile string                            Write memory profile to file
+      --min-age Duration                             Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --min-size SizeSuffix                          Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+      --modify-window duration                       Max time diff to be considered the same (default 1ns)
+      --no-check-certificate                         Do not verify the server SSL certificate. Insecure.
+      --no-gzip-encoding                             Don't set Accept-Encoding: gzip.
+      --no-traverse                                  Don't traverse destination file system on copy.
+      --no-update-modtime                            Don't update destination mod-time if files identical.
+  -x, --one-file-system                              Don't cross filesystem boundaries (unix/macOS only).
+      --onedrive-chunk-size SizeSuffix               Chunk size to upload files with - must be multiple of 320k. (default 10M)
+      --onedrive-client-id string                    Microsoft App Client Id
+      --onedrive-client-secret string                Microsoft App Client Secret
+      --onedrive-drive-id string                     The ID of the drive to use
+      --onedrive-drive-type string                   The type of the drive ( personal | business | documentLibrary )
+      --onedrive-expose-onenote-files                Set to make OneNote files show up in directory listings.
+      --opendrive-password string                    Password.
+      --opendrive-username string                    Username
+      --pcloud-client-id string                      Pcloud App Client Id
+      --pcloud-client-secret string                  Pcloud App Client Secret
+  -P, --progress                                     Show progress during transfer.
+      --qingstor-access-key-id string                QingStor Access Key ID
+      --qingstor-chunk-size SizeSuffix               Chunk size to use for uploading. (default 4M)
+      --qingstor-connection-retries int              Number of connection retries. (default 3)
+      --qingstor-endpoint string                     Enter a endpoint URL to connection QingStor API.
+      --qingstor-env-auth                            Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
+      --qingstor-secret-access-key string            QingStor Secret Access Key (password)
+      --qingstor-upload-concurrency int              Concurrency for multipart uploads. (default 1)
+      --qingstor-upload-cutoff SizeSuffix            Cutoff for switching to chunked upload (default 200M)
+      --qingstor-zone string                         Zone to connect to.
+  -q, --quiet                                        Print as little stuff as possible
+      --rc                                           Enable the remote control server.
+      --rc-addr string                               IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+      --rc-cert string                               SSL PEM key (concatenation of certificate and CA certificate)
+      --rc-client-ca string                          Client certificate authority to verify clients with
+      --rc-files string                              Path to local files to serve on the HTTP server.
+      --rc-htpasswd string                           htpasswd file - if not provided no authentication is done
+      --rc-key string                                SSL PEM Private key
+      --rc-max-header-bytes int                      Maximum size of request header (default 4096)
+      --rc-no-auth                                   Don't require auth for certain methods.
+      --rc-pass string                               Password for authentication.
+      --rc-realm string                              realm for authentication (default "rclone")
+      --rc-serve                                     Enable the serving of remote objects.
+      --rc-server-read-timeout duration              Timeout for server reading data (default 1h0m0s)
+      --rc-server-write-timeout duration             Timeout for server writing data (default 1h0m0s)
+      --rc-user string                               User name for authentication.
+      --retries int                                  Retry operations this many times if they fail (default 3)
+      --retries-sleep duration                       Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+      --s3-access-key-id string                      AWS Access Key ID.
+      --s3-acl string                                Canned ACL used when creating buckets and storing or copying objects.
+      --s3-bucket-acl string                         Canned ACL used when creating buckets.
+      --s3-chunk-size SizeSuffix                     Chunk size to use for uploading. (default 5M)
+      --s3-disable-checksum                          Don't store MD5 checksum with object metadata
+      --s3-endpoint string                           Endpoint for S3 API.
+      --s3-env-auth                                  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+      --s3-force-path-style                          If true use path style access if false use virtual hosted style. (default true)
+      --s3-location-constraint string                Location constraint - must be set to match the Region.
+      --s3-provider string                           Choose your S3 provider.
+      --s3-region string                             Region to connect to.
+      --s3-secret-access-key string                  AWS Secret Access Key (password)
+      --s3-server-side-encryption string             The server-side encryption algorithm used when storing this object in S3.
+      --s3-session-token string                      An AWS session token
+      --s3-sse-kms-key-id string                     If using KMS ID you must provide the ARN of Key.
+      --s3-storage-class string                      The storage class to use when storing new objects in S3.
+      --s3-upload-concurrency int                    Concurrency for multipart uploads. (default 4)
+      --s3-upload-cutoff SizeSuffix                  Cutoff for switching to chunked upload (default 200M)
+      --s3-v2-auth                                   If true use v2 authentication.
+      --sftp-ask-password                            Allow asking for SFTP password when needed.
+      --sftp-disable-hashcheck                       Disable the execution of SSH commands to determine if remote file hashing is available.
+      --sftp-host string                             SSH host to connect to
+      --sftp-key-file string                         Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
+      --sftp-key-file-pass string                    The passphrase to decrypt the PEM-encoded private key file.
+      --sftp-key-use-agent                           When set forces the usage of the ssh-agent.
+      --sftp-pass string                             SSH password, leave blank to use ssh-agent.
+      --sftp-path-override string                    Override path used by SSH connection.
+      --sftp-port string                             SSH port, leave blank to use default (22)
+      --sftp-set-modtime                             Set the modified time on the remote if set. (default true)
+      --sftp-use-insecure-cipher                     Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+      --sftp-user string                             SSH username, leave blank for current username, ncw
+      --size-only                                    Skip based on size only, not mod-time or checksum
+      --skip-links                                   Don't warn about skipped symlinks.
+      --stats duration                               Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+      --stats-file-name-length int                   Max file name length in stats. 0 for no limit (default 45)
+      --stats-log-level string                       Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+      --stats-one-line                               Make the stats fit on one line.
+      --stats-unit string                            Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+      --streaming-upload-cutoff SizeSuffix           Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+      --suffix string                                Suffix for use with --backup-dir.
+      --swift-application-credential-id string       Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
+      --swift-application-credential-name string     Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
+      --swift-application-credential-secret string   Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
+      --swift-auth string                            Authentication URL for server (OS_AUTH_URL).
+      --swift-auth-token string                      Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+      --swift-auth-version int                       AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+      --swift-chunk-size SizeSuffix                  Above this size files will be chunked into a _segments container. (default 5G)
+      --swift-domain string                          User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+      --swift-endpoint-type string                   Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+      --swift-env-auth                               Get swift credentials from environment variables in standard OpenStack form.
+      --swift-key string                             API key or password (OS_PASSWORD).
+      --swift-no-chunk                               Don't chunk files during streaming upload.
+      --swift-region string                          Region name - optional (OS_REGION_NAME)
+      --swift-storage-policy string                  The storage policy to use when creating a new container
+      --swift-storage-url string                     Storage URL - optional (OS_STORAGE_URL)
+      --swift-tenant string                          Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+      --swift-tenant-domain string                   Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+      --swift-tenant-id string                       Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+      --swift-user string                            User name to log in (OS_USERNAME).
+      --swift-user-id string                         User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+      --syslog                                       Use Syslog for logging
+      --syslog-facility string                       Facility for syslog, eg KERN,USER,... (default "DAEMON")
+      --timeout duration                             IO idle timeout (default 5m0s)
+      --tpslimit float                               Limit HTTP transactions per second to this.
+      --tpslimit-burst int                           Max burst of transactions for --tpslimit. (default 1)
+      --track-renames                                When synchronizing, track file renames and do a server side move if possible
+      --transfers int                                Number of file transfers to run in parallel. (default 4)
+      --union-remotes string                         List of space separated remotes.
+  -u, --update                                       Skip files that are newer on the destination.
+      --use-cookies                                  Enable session cookiejar.
+      --use-mmap                                     Use mmap allocator (see docs).
+      --use-server-modtime                           Use server modified time instead of object metadata
+      --user-agent string                            Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
+  -v, --verbose count                                Print lots more stuff (repeat for more)
+      --webdav-bearer-token string                   Bearer token instead of user/pass (eg a Macaroon)
+      --webdav-pass string                           Password.
+      --webdav-url string                            URL of http host to connect to
+      --webdav-user string                           User name
+      --webdav-vendor string                         Name of the Webdav site/service/software you are using
+      --yandex-client-id string                      Yandex Client Id
+      --yandex-client-secret string                  Yandex Client Secret
+      --yandex-unlink                                Remove existing public link to file/folder with link command rather than creating.
 ```
 
 ### SEE ALSO
 
 * [rclone](/commands/rclone/)	 - Show help for rclone commands, flags and backends.
 
-###### Auto generated by spf13/cobra on 24-Nov-2018
+###### Auto generated by spf13/cobra on 9-Feb-2019
diff --git a/docs/content/commands/rclone_touch.md b/docs/content/commands/rclone_touch.md
index dde27f007..1178772db 100644
--- a/docs/content/commands/rclone_touch.md
+++ b/docs/content/commands/rclone_touch.md
@@ -1,5 +1,5 @@
 ---
-date: 2018-11-24T13:43:29Z
+date: 2019-02-09T10:42:18Z
 title: "rclone touch"
 slug: rclone_touch
 url: /commands/rclone_touch/
@@ -27,285 +27,303 @@ rclone touch remote:path [flags]
 ### Options inherited from parent commands
 
 ```
-      --acd-auth-url string                          Auth server URL.
-      --acd-client-id string                         Amazon Application Client ID.
-      --acd-client-secret string                     Amazon Application Client Secret.
-      --acd-templink-threshold SizeSuffix            Files >= this size will be downloaded via their tempLink. (default 9G)
-      --acd-token-url string                         Token server url.
-      --acd-upload-wait-per-gb Duration              Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
-      --alias-remote string                          Remote or path to alias.
-      --ask-password                                 Allow prompt for password for encrypted configuration. (default true)
-      --auto-confirm                                 If enabled, do not request console confirmation.
-      --azureblob-access-tier string                 Access tier of blob: hot, cool or archive.
-      --azureblob-account string                     Storage Account Name (leave blank to use connection string or SAS URL)
-      --azureblob-chunk-size SizeSuffix              Upload chunk size (<= 100MB). (default 4M)
-      --azureblob-endpoint string                    Endpoint for the service
-      --azureblob-key string                         Storage Account Key (leave blank to use connection string or SAS URL)
-      --azureblob-list-chunk int                     Size of blob list. (default 5000)
-      --azureblob-sas-url string                     SAS URL for container level access only
-      --azureblob-upload-cutoff SizeSuffix           Cutoff for switching to chunked upload (<= 256MB). (default 256M)
-      --b2-account string                            Account ID or Application Key ID
-      --b2-chunk-size SizeSuffix                     Upload chunk size. Must fit in memory. (default 96M)
-      --b2-endpoint string                           Endpoint for the service.
-      --b2-hard-delete                               Permanently delete files on remote removal, otherwise hide files.
-      --b2-key string                                Application Key
-      --b2-test-mode string                          A flag string for X-Bz-Test-Mode header for debugging.
-      --b2-upload-cutoff SizeSuffix                  Cutoff for switching to chunked upload. (default 200M)
-      --b2-versions                                  Include old versions in directory listings.
-      --backup-dir string                            Make backups into hierarchy based in DIR.
-      --bind string                                  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
-      --box-client-id string                         Box App Client Id.
-      --box-client-secret string                     Box App Client Secret
-      --box-commit-retries int                       Max number of times to try committing a multipart file. (default 100)
-      --box-upload-cutoff SizeSuffix                 Cutoff for switching to multipart upload (>= 50MB). (default 50M)
-      --buffer-size int                              In memory buffer size when reading files for each --transfer. (default 16M)
-      --bwlimit BwTimetable                          Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
-      --cache-chunk-clean-interval Duration          How often should the cache perform cleanups of the chunk storage. (default 1m0s)
-      --cache-chunk-no-memory                        Disable the in-memory cache for storing chunks during streaming.
-      --cache-chunk-path string                      Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
-      --cache-chunk-size SizeSuffix                  The size of a chunk (partial file data). (default 5M)
-      --cache-chunk-total-size SizeSuffix            The total size that the chunks can take up on the local disk. (default 10G)
-      --cache-db-path string                         Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
-      --cache-db-purge                               Clear all the cached data for this remote on start.
-      --cache-db-wait-time Duration                  How long to wait for the DB to be available - 0 is unlimited (default 1s)
-      --cache-dir string                             Directory rclone will use for caching. (default "$HOME/.cache/rclone")
-      --cache-info-age Duration                      How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
-      --cache-plex-insecure string                   Skip all certificate verifications when connecting to the Plex server
-      --cache-plex-password string                   The password of the Plex user
-      --cache-plex-url string                        The URL of the Plex server
-      --cache-plex-username string                   The username of the Plex user
-      --cache-read-retries int                       How many times to retry a read from a cache storage. (default 10)
-      --cache-remote string                          Remote to cache.
-      --cache-rps int                                Limits the number of requests per second to the source FS (-1 to disable) (default -1)
-      --cache-tmp-upload-path string                 Directory to keep temporary files until they are uploaded.
-      --cache-tmp-wait-time Duration                 How long should files be stored in local cache before being uploaded (default 15s)
-      --cache-workers int                            How many workers should run in parallel to download chunks. (default 4)
-      --cache-writes                                 Cache file data on writes through the FS
-      --checkers int                                 Number of checkers to run in parallel. (default 8)
-  -c, --checksum                                     Skip based on checksum & size, not mod-time & size
-      --config string                                Config file. (default "/home/ncw/.rclone.conf")
-      --contimeout duration                          Connect timeout (default 1m0s)
-  -L, --copy-links                                   Follow symlinks and copy the pointed to item.
-      --cpuprofile string                            Write cpu profile to file
-      --crypt-directory-name-encryption              Option to either encrypt directory names or leave them intact. (default true)
-      --crypt-filename-encryption string             How to encrypt the filenames. (default "standard")
-      --crypt-password string                        Password or pass phrase for encryption.
-      --crypt-password2 string                       Password or pass phrase for salt. Optional but recommended.
-      --crypt-remote string                          Remote to encrypt/decrypt.
-      --crypt-show-mapping                           For all files listed show how the names encrypt.
-      --delete-after                                 When synchronizing, delete files on destination after transferring (default)
-      --delete-before                                When synchronizing, delete files on destination before transferring
-      --delete-during                                When synchronizing, delete files during transfer
-      --delete-excluded                              Delete files on dest excluded from sync
-      --disable string                               Disable a comma separated list of features. Use help to see a list.
-      --drive-acknowledge-abuse                      Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
-      --drive-allow-import-name-change               Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
-      --drive-alternate-export                       Use alternate export URLs for google documents export.,
-      --drive-auth-owner-only                        Only consider files owned by the authenticated user.
-      --drive-chunk-size SizeSuffix                  Upload chunk size. Must a power of 2 >= 256k. (default 8M)
-      --drive-client-id string                       Google Application Client Id
-      --drive-client-secret string                   Google Application Client Secret
-      --drive-export-formats string                  Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
-      --drive-formats string                         Deprecated: see export_formats
-      --drive-impersonate string                     Impersonate this user when using a service account.
-      --drive-import-formats string                  Comma separated list of preferred formats for uploading Google docs.
-      --drive-keep-revision-forever                  Keep new head revision of each file forever.
-      --drive-list-chunk int                         Size of listing chunk 100-1000. 0 to disable. (default 1000)
-      --drive-root-folder-id string                  ID of the root folder
-      --drive-scope string                           Scope that rclone should use when requesting access from drive.
-      --drive-service-account-credentials string     Service Account Credentials JSON blob
-      --drive-service-account-file string            Service Account Credentials JSON file path
-      --drive-shared-with-me                         Only show files that are shared with me.
-      --drive-skip-gdocs                             Skip google documents in all listings.
-      --drive-team-drive string                      ID of the Team Drive
-      --drive-trashed-only                           Only show files that are in the trash.
-      --drive-upload-cutoff SizeSuffix               Cutoff for switching to chunked upload (default 8M)
-      --drive-use-created-date                       Use file created date instead of modified date.,
-      --drive-use-trash                              Send files to the trash instead of deleting permanently. (default true)
-      --drive-v2-download-min-size SizeSuffix        If Object's are greater, use drive v2 API to download. (default off)
-      --dropbox-chunk-size SizeSuffix                Upload chunk size. (< 150M). (default 48M)
-      --dropbox-client-id string                     Dropbox App Client Id
-      --dropbox-client-secret string                 Dropbox App Client Secret
-      --dropbox-impersonate string                   Impersonate this user when using a business account.
-  -n, --dry-run                                      Do a trial run with no permanent changes
-      --dump string                                  List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
-      --dump-bodies                                  Dump HTTP headers and bodies - may contain sensitive info
-      --dump-headers                                 Dump HTTP bodies - may contain sensitive info
-      --exclude stringArray                          Exclude files matching pattern
-      --exclude-from stringArray                     Read exclude patterns from file
-      --exclude-if-present string                    Exclude directories if filename is present
-      --fast-list                                    Use recursive list if available. Uses more memory but fewer transactions.
-      --files-from stringArray   Read list of source-file names from file
-  -f, --filter stringArray   Add a file-filtering rule
-      --filter-from stringArray   Read filtering patterns from a file
-      --ftp-host string   FTP host to connect to
-      --ftp-pass string   FTP password
-      --ftp-port string   FTP port, leave blank to use default (21)
-      --ftp-user string   FTP username, leave blank for current username, $USER
-      --gcs-bucket-acl string   Access Control List for new buckets.
-      --gcs-client-id string   Google Application Client Id
-      --gcs-client-secret string   Google Application Client Secret
-      --gcs-location string   Location for the newly created buckets.
-      --gcs-object-acl string   Access Control List for new objects.
-      --gcs-project-number string   Project number.
-      --gcs-service-account-file string   Service Account Credentials JSON file path
-      --gcs-storage-class string   The storage class to use when storing objects in Google Cloud Storage.
-      --http-url string   URL of http host to connect to
-      --hubic-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
-      --hubic-client-id string   Hubic Client Id
-      --hubic-client-secret string   Hubic Client Secret
-      --ignore-case   Ignore case in filters (case insensitive)
-      --ignore-checksum   Skip post copy check of checksums.
-      --ignore-errors   delete even if there are I/O errors
-      --ignore-existing   Skip all files that exist on destination
-      --ignore-size   Ignore size when skipping use mod-time or checksum.
-  -I, --ignore-times   Don't skip files that match size and time - transfer all files
-      --immutable   Do not modify files. Fail if existing files have been modified.
-      --include stringArray   Include files matching pattern
-      --include-from stringArray   Read include patterns from file
-      --jottacloud-hard-delete   Delete files permanently rather than putting them into the trash.
-      --jottacloud-md5-memory-limit SizeSuffix   Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
-      --jottacloud-mountpoint string   The mountpoint to use.
-      --jottacloud-pass string   Password.
-      --jottacloud-unlink   Remove existing public link to file/folder with link command rather than creating.
-      --jottacloud-user string   User Name
-      --local-no-check-updated   Don't check to see if the files change during upload
-      --local-no-unicode-normalization   Don't apply unicode normalization to paths and filenames (Deprecated)
-      --local-nounc string   Disable UNC (long path names) conversion on Windows
-      --log-file string   Log everything to this file
-      --log-format string   Comma separated list of log format options (default "date,time")
-      --log-level string   Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
-      --low-level-retries int   Number of low level retries to do. (default 10)
-      --max-age duration   Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-      --max-backlog int   Maximum number of objects in sync or check backlog. (default 10000)
-      --max-delete int   When synchronizing, limit the number of deletes (default -1)
-      --max-depth int   If set limits the recursion depth to this. (default -1)
-      --max-size int   Only transfer files smaller than this in k or suffix b|k|M|G (default off)
-      --max-transfer int   Maximum size of data to transfer. (default off)
-      --mega-debug   Output more debug from Mega.
-      --mega-hard-delete   Delete files permanently rather than putting them into the trash.
-      --mega-pass string   Password.
-      --mega-user string   User name
-      --memprofile string   Write memory profile to file
-      --min-age duration   Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-      --min-size int   Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-      --modify-window duration   Max time diff to be considered the same (default 1ns)
-      --no-check-certificate   Do not verify the server SSL certificate. Insecure.
-      --no-gzip-encoding   Don't set Accept-Encoding: gzip.
-      --no-traverse   Obsolete - does nothing.
-      --no-update-modtime   Don't update destination mod-time if files identical.
-  -x, --one-file-system   Don't cross filesystem boundaries (unix/macOS only).
-      --onedrive-chunk-size SizeSuffix   Chunk size to upload files with - must be multiple of 320k. (default 10M)
-      --onedrive-client-id string   Microsoft App Client Id
-      --onedrive-client-secret string   Microsoft App Client Secret
-      --onedrive-drive-id string   The ID of the drive to use
-      --onedrive-drive-type string   The type of the drive ( personal | business | documentLibrary )
-      --onedrive-expose-onenote-files   Set to make OneNote files show up in directory listings.
-      --opendrive-password string   Password.
-      --opendrive-username string   Username
-      --pcloud-client-id string   Pcloud App Client Id
-      --pcloud-client-secret string   Pcloud App Client Secret
-  -P, --progress   Show progress during transfer.
-      --qingstor-access-key-id string   QingStor Access Key ID
-      --qingstor-connection-retries int   Number of connection retries. (default 3)
-      --qingstor-endpoint string   Enter a endpoint URL to connection QingStor API.
-      --qingstor-env-auth   Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
-      --qingstor-secret-access-key string   QingStor Secret Access Key (password)
-      --qingstor-zone string   Zone to connect to.
-  -q, --quiet   Print as little stuff as possible
-      --rc   Enable the remote control server.
-      --rc-addr string   IPaddress:Port or :Port to bind server to. (default "localhost:5572")
-      --rc-cert string   SSL PEM key (concatenation of certificate and CA certificate)
-      --rc-client-ca string   Client certificate authority to verify clients with
-      --rc-files string   Path to local files to serve on the HTTP server.
-      --rc-htpasswd string   htpasswd file - if not provided no authentication is done
-      --rc-key string   SSL PEM Private key
-      --rc-max-header-bytes int   Maximum size of request header (default 4096)
-      --rc-no-auth   Don't require auth for certain methods.
-      --rc-pass string   Password for authentication.
-      --rc-realm string   realm for authentication (default "rclone")
-      --rc-serve   Enable the serving of remote objects.
-      --rc-server-read-timeout duration   Timeout for server reading data (default 1h0m0s)
-      --rc-server-write-timeout duration   Timeout for server writing data (default 1h0m0s)
-      --rc-user string   User name for authentication.
-      --retries int   Retry operations this many times if they fail (default 3)
-      --retries-sleep duration   Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
-      --s3-access-key-id string   AWS Access Key ID.
-      --s3-acl string   Canned ACL used when creating buckets and storing or copying objects.
-      --s3-chunk-size SizeSuffix   Chunk size to use for uploading. (default 5M)
-      --s3-disable-checksum   Don't store MD5 checksum with object metadata
-      --s3-endpoint string   Endpoint for S3 API.
-      --s3-env-auth   Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
-      --s3-force-path-style   If true use path style access if false use virtual hosted style. (default true)
-      --s3-location-constraint string   Location constraint - must be set to match the Region.
-      --s3-provider string   Choose your S3 provider.
-      --s3-region string   Region to connect to.
-      --s3-secret-access-key string   AWS Secret Access Key (password)
-      --s3-server-side-encryption string   The server-side encryption algorithm used when storing this object in S3.
-      --s3-session-token string   An AWS session token
-      --s3-sse-kms-key-id string   If using KMS ID you must provide the ARN of Key.
-      --s3-storage-class string   The storage class to use when storing new objects in S3.
-      --s3-upload-concurrency int   Concurrency for multipart uploads. (default 2)
-      --s3-v2-auth   If true use v2 authentication.
-      --sftp-ask-password   Allow asking for SFTP password when needed.
-      --sftp-disable-hashcheck   Disable the execution of SSH commands to determine if remote file hashing is available.
-      --sftp-host string   SSH host to connect to
-      --sftp-key-file string   Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
-      --sftp-pass string   SSH password, leave blank to use ssh-agent.
-      --sftp-path-override string   Override path used by SSH connection.
-      --sftp-port string   SSH port, leave blank to use default (22)
-      --sftp-set-modtime   Set the modified time on the remote if set. (default true)
-      --sftp-use-insecure-cipher   Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
-      --sftp-user string   SSH username, leave blank for current username, ncw
-      --size-only   Skip based on size only, not mod-time or checksum
-      --skip-links   Don't warn about skipped symlinks.
-      --stats duration   Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
-      --stats-file-name-length int   Max file name length in stats. 0 for no limit (default 40)
-      --stats-log-level string   Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
-      --stats-one-line   Make the stats fit on one line.
-      --stats-unit string   Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
-      --streaming-upload-cutoff int   Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
-      --suffix string   Suffix for use with --backup-dir.
-      --swift-auth string   Authentication URL for server (OS_AUTH_URL).
-      --swift-auth-token string   Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
-      --swift-auth-version int   AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
-      --swift-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
-      --swift-domain string   User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
-      --swift-endpoint-type string   Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-      --swift-env-auth   Get swift credentials from environment variables in standard OpenStack form.
-      --swift-key string   API key or password (OS_PASSWORD).
-      --swift-region string   Region name - optional (OS_REGION_NAME)
-      --swift-storage-policy string   The storage policy to use when creating a new container
-      --swift-storage-url string   Storage URL - optional (OS_STORAGE_URL)
-      --swift-tenant string   Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
-      --swift-tenant-domain string   Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
-      --swift-tenant-id string   Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
-      --swift-user string   User name to log in (OS_USERNAME).
-      --swift-user-id string   User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
-      --syslog   Use Syslog for logging
-      --syslog-facility string   Facility for syslog, eg KERN,USER,... (default "DAEMON")
-      --timeout duration   IO idle timeout (default 5m0s)
-      --tpslimit float   Limit HTTP transactions per second to this.
-      --tpslimit-burst int   Max burst of transactions for --tpslimit. (default 1)
-      --track-renames   When synchronizing, track file renames and do a server side move if possible
-      --transfers int   Number of file transfers to run in parallel. (default 4)
-      --union-remotes string   List of space separated remotes.
-  -u, --update   Skip files that are newer on the destination.
-      --use-server-modtime   Use server modified time instead of object metadata
-      --user-agent string   Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45")
-  -v, --verbose count   Print lots more stuff (repeat for more)
-      --webdav-bearer-token string   Bearer token instead of user/pass (eg a Macaroon)
-      --webdav-pass string   Password.
-      --webdav-url string   URL of http host to connect to
-      --webdav-user string   User name
-      --webdav-vendor string   Name of the Webdav site/service/software you are using
-      --yandex-client-id string   Yandex Client Id
-      --yandex-client-secret string   Yandex Client Secret
-      --yandex-unlink   Remove existing public link to file/folder with link command rather than creating.
+      --acd-auth-url string   Auth server URL.
+      --acd-client-id string   Amazon Application Client ID.
+      --acd-client-secret string   Amazon Application Client Secret.
+      --acd-templink-threshold SizeSuffix   Files >= this size will be downloaded via their tempLink. (default 9G)
+      --acd-token-url string   Token server url.
+      --acd-upload-wait-per-gb Duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+      --alias-remote string   Remote or path to alias.
+      --ask-password   Allow prompt for password for encrypted configuration. (default true)
+      --auto-confirm   If enabled, do not request console confirmation.
+      --azureblob-access-tier string   Access tier of blob: hot, cool or archive.
+      --azureblob-account string   Storage Account Name (leave blank to use connection string or SAS URL)
+      --azureblob-chunk-size SizeSuffix   Upload chunk size (<= 100MB). (default 4M)
+      --azureblob-endpoint string   Endpoint for the service
+      --azureblob-key string   Storage Account Key (leave blank to use connection string or SAS URL)
+      --azureblob-list-chunk int   Size of blob list. (default 5000)
+      --azureblob-sas-url string   SAS URL for container level access only
+      --azureblob-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+      --b2-account string   Account ID or Application Key ID
+      --b2-chunk-size SizeSuffix   Upload chunk size. Must fit in memory. (default 96M)
+      --b2-disable-checksum   Disable checksums for large (> upload cutoff) files
+      --b2-endpoint string   Endpoint for the service.
+      --b2-hard-delete   Permanently delete files on remote removal, otherwise hide files.
+      --b2-key string   Application Key
+      --b2-test-mode string   A flag string for X-Bz-Test-Mode header for debugging.
+      --b2-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload. (default 200M)
+      --b2-versions   Include old versions in directory listings.
+      --backup-dir string   Make backups into hierarchy based in DIR.
+      --bind string   Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+      --box-client-id string   Box App Client Id.
+      --box-client-secret string   Box App Client Secret
+      --box-commit-retries int   Max number of times to try committing a multipart file. (default 100)
+      --box-upload-cutoff SizeSuffix   Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+      --buffer-size SizeSuffix   In memory buffer size when reading files for each --transfer. (default 16M)
+      --bwlimit BwTimetable   Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+      --cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+      --cache-chunk-no-memory   Disable the in-memory cache for storing chunks during streaming.
+      --cache-chunk-path string   Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
+      --cache-chunk-size SizeSuffix   The size of a chunk (partial file data). (default 5M)
+      --cache-chunk-total-size SizeSuffix   The total size that the chunks can take up on the local disk. (default 10G)
+      --cache-db-path string   Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
+      --cache-db-purge   Clear all the cached data for this remote on start.
+      --cache-db-wait-time Duration   How long to wait for the DB to be available - 0 is unlimited (default 1s)
+      --cache-dir string   Directory rclone will use for caching. (default "$HOME/.cache/rclone")
+      --cache-info-age Duration   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+      --cache-plex-insecure string   Skip all certificate verifications when connecting to the Plex server
+      --cache-plex-password string   The password of the Plex user
+      --cache-plex-url string   The URL of the Plex server
+      --cache-plex-username string   The username of the Plex user
+      --cache-read-retries int   How many times to retry a read from a cache storage. (default 10)
+      --cache-remote string   Remote to cache.
+      --cache-rps int   Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+      --cache-tmp-upload-path string   Directory to keep temporary files until they are uploaded.
+      --cache-tmp-wait-time Duration   How long should files be stored in local cache before being uploaded (default 15s)
+      --cache-workers int   How many workers should run in parallel to download chunks. (default 4)
+      --cache-writes   Cache file data on writes through the FS
+      --checkers int   Number of checkers to run in parallel. (default 8)
+  -c, --checksum   Skip based on checksum (if available) & size, not mod-time & size
+      --config string   Config file. (default "/home/ncw/.rclone.conf")
+      --contimeout duration   Connect timeout (default 1m0s)
+  -L, --copy-links   Follow symlinks and copy the pointed to item.
+      --cpuprofile string   Write cpu profile to file
+      --crypt-directory-name-encryption   Option to either encrypt directory names or leave them intact. (default true)
+      --crypt-filename-encryption string   How to encrypt the filenames. (default "standard")
+      --crypt-password string   Password or pass phrase for encryption.
+      --crypt-password2 string   Password or pass phrase for salt. Optional but recommended.
+      --crypt-remote string   Remote to encrypt/decrypt.
+      --crypt-show-mapping   For all files listed show how the names encrypt.
+      --delete-after   When synchronizing, delete files on destination after transferring (default)
+      --delete-before   When synchronizing, delete files on destination before transferring
+      --delete-during   When synchronizing, delete files during transfer
+      --delete-excluded   Delete files on dest excluded from sync
+      --disable string   Disable a comma separated list of features. Use help to see a list.
+      --drive-acknowledge-abuse   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+      --drive-allow-import-name-change   Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+      --drive-alternate-export   Use alternate export URLs for google documents export.,
+      --drive-auth-owner-only   Only consider files owned by the authenticated user.
+      --drive-chunk-size SizeSuffix   Upload chunk size. Must a power of 2 >= 256k. (default 8M)
+      --drive-client-id string   Google Application Client Id
+      --drive-client-secret string   Google Application Client Secret
+      --drive-export-formats string   Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+      --drive-formats string   Deprecated: see export_formats
+      --drive-impersonate string   Impersonate this user when using a service account.
+      --drive-import-formats string   Comma separated list of preferred formats for uploading Google docs.
+      --drive-keep-revision-forever   Keep new head revision of each file forever.
+      --drive-list-chunk int   Size of listing chunk 100-1000. 0 to disable. (default 1000)
+      --drive-pacer-burst int   Number of API calls to allow without sleeping. (default 100)
+      --drive-pacer-min-sleep Duration   Minimum time to sleep between API calls. (default 100ms)
+      --drive-root-folder-id string   ID of the root folder
+      --drive-scope string   Scope that rclone should use when requesting access from drive.
+      --drive-service-account-credentials string   Service Account Credentials JSON blob
+      --drive-service-account-file string   Service Account Credentials JSON file path
+      --drive-shared-with-me   Only show files that are shared with me.
+      --drive-skip-gdocs   Skip google documents in all listings.
+      --drive-team-drive string   ID of the Team Drive
+      --drive-trashed-only   Only show files that are in the trash.
+      --drive-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 8M)
+      --drive-use-created-date   Use file created date instead of modified date.,
+      --drive-use-trash   Send files to the trash instead of deleting permanently. (default true)
+      --drive-v2-download-min-size SizeSuffix   If Object's are greater, use drive v2 API to download. (default off)
+      --dropbox-chunk-size SizeSuffix   Upload chunk size. (< 150M). (default 48M)
+      --dropbox-client-id string   Dropbox App Client Id
+      --dropbox-client-secret string   Dropbox App Client Secret
+      --dropbox-impersonate string   Impersonate this user when using a business account.
+  -n, --dry-run   Do a trial run with no permanent changes
+      --dump DumpFlags   List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+      --dump-bodies   Dump HTTP headers and bodies - may contain sensitive info
+      --dump-headers   Dump HTTP bodies - may contain sensitive info
+      --exclude stringArray   Exclude files matching pattern
+      --exclude-from stringArray   Read exclude patterns from file
+      --exclude-if-present string   Exclude directories if filename is present
+      --fast-list   Use recursive list if available. Uses more memory but fewer transactions.
+      --files-from stringArray   Read list of source-file names from file
+  -f, --filter stringArray   Add a file-filtering rule
+      --filter-from stringArray   Read filtering patterns from a file
+      --ftp-host string   FTP host to connect to
+      --ftp-pass string   FTP password
+      --ftp-port string   FTP port, leave blank to use default (21)
+      --ftp-user string   FTP username, leave blank for current username, $USER
+      --gcs-bucket-acl string   Access Control List for new buckets.
+      --gcs-client-id string   Google Application Client Id
+      --gcs-client-secret string   Google Application Client Secret
+      --gcs-location string   Location for the newly created buckets.
+      --gcs-object-acl string   Access Control List for new objects.
+      --gcs-project-number string   Project number.
+      --gcs-service-account-file string   Service Account Credentials JSON file path
+      --gcs-storage-class string   The storage class to use when storing objects in Google Cloud Storage.
+      --http-url string   URL of http host to connect to
+      --hubic-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
+      --hubic-client-id string   Hubic Client Id
+      --hubic-client-secret string   Hubic Client Secret
+      --hubic-no-chunk   Don't chunk files during streaming upload.
+      --ignore-case   Ignore case in filters (case insensitive)
+      --ignore-checksum   Skip post copy check of checksums.
+      --ignore-errors   delete even if there are I/O errors
+      --ignore-existing   Skip all files that exist on destination
+      --ignore-size   Ignore size when skipping use mod-time or checksum.
+  -I, --ignore-times   Don't skip files that match size and time - transfer all files
+      --immutable   Do not modify files. Fail if existing files have been modified.
+      --include stringArray   Include files matching pattern
+      --include-from stringArray   Read include patterns from file
+      --jottacloud-hard-delete   Delete files permanently rather than putting them into the trash.
+      --jottacloud-md5-memory-limit SizeSuffix   Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+      --jottacloud-mountpoint string   The mountpoint to use.
+      --jottacloud-unlink   Remove existing public link to file/folder with link command rather than creating.
+      --jottacloud-upload-resume-limit SizeSuffix   Files bigger than this can be resumed if the upload fail's. (default 10M)
+      --jottacloud-user string   User Name:
+  -l, --links   Translate symlinks to/from regular files with a '.rclonelink' extension
+      --local-no-check-updated   Don't check to see if the files change during upload
+      --local-no-unicode-normalization   Don't apply unicode normalization to paths and filenames (Deprecated)
+      --local-nounc string   Disable UNC (long path names) conversion on Windows
+      --log-file string   Log everything to this file
+      --log-format string   Comma separated list of log format options (default "date,time")
+      --log-level string   Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+      --low-level-retries int   Number of low level retries to do. (default 10)
+      --max-age Duration   Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --max-backlog int   Maximum number of objects in sync or check backlog. (default 10000)
+      --max-delete int   When synchronizing, limit the number of deletes (default -1)
+      --max-depth int   If set limits the recursion depth to this. (default -1)
+      --max-size SizeSuffix   Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+      --max-transfer SizeSuffix   Maximum size of data to transfer. (default off)
+      --mega-debug   Output more debug from Mega.
+      --mega-hard-delete   Delete files permanently rather than putting them into the trash.
+      --mega-pass string   Password.
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_tree.md b/docs/content/commands/rclone_tree.md index 2873634ad..3f5287a9c 100644 --- a/docs/content/commands/rclone_tree.md +++ b/docs/content/commands/rclone_tree.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone tree" slug: rclone_tree url: /commands/rclone_tree/ @@ -68,285 +68,303 @@ rclone tree remote:path [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). 
(default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). 
(default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. (default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. 
(default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. - --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. 
(default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
- --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. 
(default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. 
- --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. 
- --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. 
- --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. - --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. 
(default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. (default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. 
Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. 
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for Google documents export. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. 
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip Google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors Delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping; use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access, if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/commands/rclone_version.md b/docs/content/commands/rclone_version.md index 714755a21..36a0686a3 100644 --- a/docs/content/commands/rclone_version.md +++ b/docs/content/commands/rclone_version.md @@ -1,5 +1,5 @@ --- -date: 2018-11-24T13:43:29Z +date: 2019-02-09T10:42:18Z title: "rclone version" slug: rclone_version url: /commands/rclone_version/ @@ -53,285 +53,303 @@ rclone version [flags] ### Options inherited from parent commands ``` - --acd-auth-url string Auth server URL. - --acd-client-id string Amazon Application Client ID. - --acd-client-secret string Amazon Application Client Secret. - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) - --acd-token-url string Token server url. - --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) - --alias-remote string Remote or path to alias. - --ask-password Allow prompt for password for encrypted configuration. (default true) - --auto-confirm If enabled, do not request console confirmation. - --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) - --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). 
(default 4M) - --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) - --azureblob-list-chunk int Size of blob list. (default 5000) - --azureblob-sas-url string SAS URL for container level access only - --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) - --b2-account string Account ID or Application Key ID - --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) - --b2-endpoint string Endpoint for the service. - --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. - --b2-key string Application Key - --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. - --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) - --b2-versions Include old versions in directory listings. - --backup-dir string Make backups into hierarchy based in DIR. - --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. - --box-client-id string Box App Client Id. - --box-client-secret string Box App Client Secret - --box-commit-retries int Max number of times to try committing a multipart file. (default 100) - --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) - --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) - --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. - --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) - --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. - --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") - --cache-chunk-size SizeSuffix The size of a chunk (partial file data). 
(default 5M) - --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) - --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") - --cache-db-purge Clear all the cached data for this remote on start. - --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) - --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") - --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) - --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server - --cache-plex-password string The password of the Plex user - --cache-plex-url string The URL of the Plex server - --cache-plex-username string The username of the Plex user - --cache-read-retries int How many times to retry a read from a cache storage. (default 10) - --cache-remote string Remote to cache. - --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) - --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. - --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) - --cache-workers int How many workers should run in parallel to download chunks. (default 4) - --cache-writes Cache file data on writes through the FS - --checkers int Number of checkers to run in parallel. (default 8) - -c, --checksum Skip based on checksum & size, not mod-time & size - --config string Config file. (default "/home/ncw/.rclone.conf") - --contimeout duration Connect timeout (default 1m0s) - -L, --copy-links Follow symlinks and copy the pointed to item. - --cpuprofile string Write cpu profile to file - --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. 
(default true) - --crypt-filename-encryption string How to encrypt the filenames. (default "standard") - --crypt-password string Password or pass phrase for encryption. - --crypt-password2 string Password or pass phrase for salt. Optional but recommended. - --crypt-remote string Remote to encrypt/decrypt. - --crypt-show-mapping For all files listed show how the names encrypt. - --delete-after When synchronizing, delete files on destination after transferring (default) - --delete-before When synchronizing, delete files on destination before transferring - --delete-during When synchronizing, delete files during transfer - --delete-excluded Delete files on dest excluded from sync - --disable string Disable a comma separated list of features. Use help to see a list. - --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. - --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. - --drive-alternate-export Use alternate export URLs for google documents export., - --drive-auth-owner-only Only consider files owned by the authenticated user. - --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) - --drive-client-id string Google Application Client Id - --drive-client-secret string Google Application Client Secret - --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") - --drive-formats string Deprecated: see export_formats - --drive-impersonate string Impersonate this user when using a service account. - --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. - --drive-keep-revision-forever Keep new head revision of each file forever. - --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. 
(default 1000) - --drive-root-folder-id string ID of the root folder - --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-service-account-credentials string Service Account Credentials JSON blob - --drive-service-account-file string Service Account Credentials JSON file path - --drive-shared-with-me Only show files that are shared with me. - --drive-skip-gdocs Skip google documents in all listings. - --drive-team-drive string ID of the Team Drive - --drive-trashed-only Only show files that are in the trash. - --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) - --drive-use-created-date Use file created date instead of modified date., - --drive-use-trash Send files to the trash instead of deleting permanently. (default true) - --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) - --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) - --dropbox-client-id string Dropbox App Client Id - --dropbox-client-secret string Dropbox App Client Secret - --dropbox-impersonate string Impersonate this user when using a business account. - -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles - --dump-bodies Dump HTTP headers and bodies - may contain sensitive info - --dump-headers Dump HTTP bodies - may contain sensitive info - --exclude stringArray Exclude files matching pattern - --exclude-from stringArray Read exclude patterns from file - --exclude-if-present string Exclude directories if filename is present - --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
- --files-from stringArray Read list of source-file names from file - -f, --filter stringArray Add a file-filtering rule - --filter-from stringArray Read filtering patterns from a file - --ftp-host string FTP host to connect to - --ftp-pass string FTP password - --ftp-port string FTP port, leave blank to use default (21) - --ftp-user string FTP username, leave blank for current username, $USER - --gcs-bucket-acl string Access Control List for new buckets. - --gcs-client-id string Google Application Client Id - --gcs-client-secret string Google Application Client Secret - --gcs-location string Location for the newly created buckets. - --gcs-object-acl string Access Control List for new objects. - --gcs-project-number string Project number. - --gcs-service-account-file string Service Account Credentials JSON file path - --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. - --http-url string URL of http host to connect to - --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) - --hubic-client-id string Hubic Client Id - --hubic-client-secret string Hubic Client Secret - --ignore-case Ignore case in filters (case insensitive) - --ignore-checksum Skip post copy check of checksums. - --ignore-errors delete even if there are I/O errors - --ignore-existing Skip all files that exist on destination - --ignore-size Ignore size when skipping use mod-time or checksum. - -I, --ignore-times Don't skip files that match size and time - transfer all files - --immutable Do not modify files. Fail if existing files have been modified. - --include stringArray Include files matching pattern - --include-from stringArray Read include patterns from file - --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. - --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. 
(default 10M) - --jottacloud-mountpoint string The mountpoint to use. - --jottacloud-pass string Password. - --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. - --jottacloud-user string User Name - --local-no-check-updated Don't check to see if the files change during upload - --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) - --local-nounc string Disable UNC (long path names) conversion on Windows - --log-file string Log everything to this file - --log-format string Comma separated list of log format options (default "date,time") - --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") - --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) - --max-delete int When synchronizing, limit the number of deletes (default -1) - --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) - --max-transfer int Maximum size of data to transfer. (default off) - --mega-debug Output more debug from Mega. - --mega-hard-delete Delete files permanently rather than putting them into the trash. - --mega-pass string Password. - --mega-user string User name - --memprofile string Write memory profile to file - --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) - --modify-window duration Max time diff to be considered the same (default 1ns) - --no-check-certificate Do not verify the server SSL certificate. Insecure. - --no-gzip-encoding Don't set Accept-Encoding: gzip. - --no-traverse Obsolete - does nothing. 
- --no-update-modtime Don't update destination mod-time if files identical. - -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). - --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) - --onedrive-client-id string Microsoft App Client Id - --onedrive-client-secret string Microsoft App Client Secret - --onedrive-drive-id string The ID of the drive to use - --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) - --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. - --opendrive-password string Password. - --opendrive-username string Username - --pcloud-client-id string Pcloud App Client Id - --pcloud-client-secret string Pcloud App Client Secret - -P, --progress Show progress during transfer. - --qingstor-access-key-id string QingStor Access Key ID - --qingstor-connection-retries int Number of connection retries. (default 3) - --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. - --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. - --qingstor-secret-access-key string QingStor Secret Access Key (password) - --qingstor-zone string Zone to connect to. - -q, --quiet Print as little stuff as possible - --rc Enable the remote control server. - --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") - --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) - --rc-client-ca string Client certificate authority to verify clients with - --rc-files string Path to local files to serve on the HTTP server. - --rc-htpasswd string htpasswd file - if not provided no authentication is done - --rc-key string SSL PEM Private key - --rc-max-header-bytes int Maximum size of request header (default 4096) - --rc-no-auth Don't require auth for certain methods. 
- --rc-pass string Password for authentication. - --rc-realm string realm for authentication (default "rclone") - --rc-serve Enable the serving of remote objects. - --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) - --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) - --rc-user string User name for authentication. - --retries int Retry operations this many times if they fail (default 3) - --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) - --s3-access-key-id string AWS Access Key ID. - --s3-acl string Canned ACL used when creating buckets and storing or copying objects. - --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) - --s3-disable-checksum Don't store MD5 checksum with object metadata - --s3-endpoint string Endpoint for S3 API. - --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) - --s3-location-constraint string Location constraint - must be set to match the Region. - --s3-provider string Choose your S3 provider. - --s3-region string Region to connect to. - --s3-secret-access-key string AWS Secret Access Key (password) - --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. - --s3-session-token string An AWS session token - --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. - --s3-storage-class string The storage class to use when storing new objects in S3. - --s3-upload-concurrency int Concurrency for multipart uploads. (default 2) - --s3-v2-auth If true use v2 authentication. - --sftp-ask-password Allow asking for SFTP password when needed. - --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. 
- --sftp-host string SSH host to connect to - --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - --sftp-pass string SSH password, leave blank to use ssh-agent. - --sftp-path-override string Override path used by SSH connection. - --sftp-port string SSH port, leave blank to use default (22) - --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. - --sftp-user string SSH username, leave blank for current username, ncw - --size-only Skip based on size only, not mod-time or checksum - --skip-links Don't warn about skipped symlinks. - --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) - --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) - --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") - --stats-one-line Make the stats fit on one line. - --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") - --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. - --swift-auth string Authentication URL for server (OS_AUTH_URL). - --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) - --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. 
(default 5G) - --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") - --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. - --swift-key string API key or password (OS_PASSWORD). - --swift-region string Region name - optional (OS_REGION_NAME) - --swift-storage-policy string The storage policy to use when creating a new container - --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) - --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) - --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) - --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) - --swift-user string User name to log in (OS_USERNAME). - --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") - --timeout duration IO idle timeout (default 5m0s) - --tpslimit float Limit HTTP transactions per second to this. - --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) - --track-renames When synchronizing, track file renames and do a server side move if possible - --transfers int Number of file transfers to run in parallel. (default 4) - --union-remotes string List of space separated remotes. - -u, --update Skip files that are newer on the destination. - --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.45") - -v, --verbose count Print lots more stuff (repeat for more) - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) - --webdav-pass string Password. - --webdav-url string URL of http host to connect to - --webdav-user string User name - --webdav-vendor string Name of the Webdav site/service/software you are using - --yandex-client-id string Yandex Client Id - --yandex-client-secret string Yandex Client Secret - --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. 
Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. 
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. (default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. 
+ --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for google documents export. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. 
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ftp-host string FTP host to connect to + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --ignore-case Ignore case in filters (case insensitive) + --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. 
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-mountpoint string The mountpoint to use. + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --jottacloud-user string User Name: + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. (default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. 
+ --mega-user string User name + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + -P, --progress Show progress during transfer. + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. (default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. 
(default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. 
(default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-v2-auth If true use v2 authentication. + --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). + --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. 
+ --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + --union-remotes string List of space separated remotes. + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46") + -v, --verbose count Print lots more stuff (repeat for more) + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. ``` ### SEE ALSO * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 24-Nov-2018 +###### Auto generated by spf13/cobra on 9-Feb-2019 diff --git a/docs/content/drive.md b/docs/content/drive.md index 3774c09a7..b4947d2da 100644 --- a/docs/content/drive.md +++ b/docs/content/drive.md @@ -787,6 +787,24 @@ If Object's are greater, use drive v2 API to download. - Type: SizeSuffix - Default: off +#### --drive-pacer-min-sleep + +Minimum time to sleep between API calls. + +- Config: pacer_min_sleep +- Env Var: RCLONE_DRIVE_PACER_MIN_SLEEP +- Type: Duration +- Default: 100ms + +#### --drive-pacer-burst + +Number of API calls to allow without sleeping. + +- Config: pacer_burst +- Env Var: RCLONE_DRIVE_PACER_BURST +- Type: int +- Default: 100 + ### Limitations ### diff --git a/docs/content/googlecloudstorage.md b/docs/content/googlecloudstorage.md index 36456fef3..cf20d741c 100644 --- a/docs/content/googlecloudstorage.md +++ b/docs/content/googlecloudstorage.md @@ -347,16 +347,26 @@ Location for the newly created buckets. - Multi-regional location for United States. - "asia-east1" - Taiwan. + - "asia-east2" + - Hong Kong. - "asia-northeast1" - Tokyo. + - "asia-south1" + - Mumbai. - "asia-southeast1" - Singapore. - "australia-southeast1" - Sydney. + - "europe-north1" + - Finland. - "europe-west1" - Belgium. - "europe-west2" - London. + - "europe-west3" + - Frankfurt. + - "europe-west4" + - Netherlands. - "us-central1" - Iowa. - "us-east1" @@ -365,6 +375,8 @@ Location for the newly created buckets. - Northern Virginia. - "us-west1" - Oregon. 
+ - "us-west2" + - California. #### --gcs-storage-class diff --git a/docs/content/http.md b/docs/content/http.md index 383a0cdfe..daa077ffc 100644 --- a/docs/content/http.md +++ b/docs/content/http.md @@ -142,5 +142,7 @@ URL of http host to connect to - Examples: - "https://example.com" - Connect to example.com + - "https://user:pass@example.com" + - Connect to example.com using a username and password diff --git a/docs/content/hubic.md b/docs/content/hubic.md index 99b22928f..bc02ee0d4 100644 --- a/docs/content/hubic.md +++ b/docs/content/hubic.md @@ -169,6 +169,24 @@ default for this is 5GB which is its maximum value. - Type: SizeSuffix - Default: 5G +#### --hubic-no-chunk + +Don't chunk files during streaming upload. + +When doing streaming uploads (eg using rcat or mount) setting this +flag will cause the swift backend to not upload chunked files. + +This will limit the maximum upload size to 5GB. However non chunked +files are easier to deal with and have an MD5SUM. + +Rclone will still chunk files bigger than chunk_size when doing normal +copy operations. + +- Config: no_chunk +- Env Var: RCLONE_HUBIC_NO_CHUNK +- Type: bool +- Default: false + ### Limitations ### diff --git a/docs/content/jottacloud.md b/docs/content/jottacloud.md index 3ebdf53c1..c62a2ebab 100644 --- a/docs/content/jottacloud.md +++ b/docs/content/jottacloud.md @@ -131,22 +131,13 @@ Here are the standard options specific to jottacloud (JottaCloud). #### --jottacloud-user -User Name +User Name: - Config: user - Env Var: RCLONE_JOTTACLOUD_USER - Type: string - Default: "" -#### --jottacloud-pass - -Password. - -- Config: pass -- Env Var: RCLONE_JOTTACLOUD_PASS -- Type: string -- Default: "" - #### --jottacloud-mountpoint The mountpoint to use. @@ -193,6 +184,15 @@ Default is false, meaning link command will create or retrieve public link. - Type: bool - Default: false +#### --jottacloud-upload-resume-limit + +Files bigger than this can be resumed if the upload fail's. 
+ +- Config: upload_resume_limit +- Env Var: RCLONE_JOTTACLOUD_UPLOAD_RESUME_LIMIT +- Type: SizeSuffix +- Default: 10M + ### Limitations ### diff --git a/docs/content/local.md b/docs/content/local.md index be5e26e20..5af34cffc 100644 --- a/docs/content/local.md +++ b/docs/content/local.md @@ -259,6 +259,15 @@ Follow symlinks and copy the pointed to item. - Type: bool - Default: false +#### --links + +Translate symlinks to/from regular files with a '.rclonelink' extension + +- Config: links +- Env Var: RCLONE_LOCAL_LINKS +- Type: bool +- Default: false + #### --skip-links Don't warn about skipped symlinks. diff --git a/docs/content/qingstor.md b/docs/content/qingstor.md index 4b740f7f2..fe6b3dc44 100644 --- a/docs/content/qingstor.md +++ b/docs/content/qingstor.md @@ -271,6 +271,9 @@ Concurrency for multipart uploads. This is the number of chunks of the same file that are uploaded concurrently. +NB if you set this to > 1 then the checksums of multipart uploads +become corrupted (the uploads themselves are not corrupted though). + If you are uploading small numbers of large file over high speed link and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers. @@ -278,6 +281,6 @@ this may help to speed up the transfers. - Config: upload_concurrency - Env Var: RCLONE_QINGSTOR_UPLOAD_CONCURRENCY - Type: int -- Default: 4 +- Default: 1 diff --git a/docs/content/rc.md b/docs/content/rc.md index d544ad01d..a419d57a6 100644 --- a/docs/content/rc.md +++ b/docs/content/rc.md @@ -226,7 +226,7 @@ The slice indices are similar to Python slices: start[:end] start is the 0 based chunk number from the beginning of the file to fetch inclusive. end is 0 based chunk number from the beginning -of the file to fetch exclisive. +of the file to fetch exclusive. Both values can be negative, in which case they count from the back of the file. The value "-5:" represents the last 5 chunks of a file.
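The `start[:end]` chunk-slice semantics described above (inclusive start, exclusive end, negative values counting from the back of the file) can be sketched in Python. This is an illustrative helper only, not rclone's implementation; `resolve_chunk_range` and its signature are hypothetical:

```python
def resolve_chunk_range(start, end, total_chunks):
    """Resolve a start[:end] chunk slice into absolute chunk numbers.

    start is inclusive, end is exclusive; negative values count from
    the back of the file, as with Python slices. None means "omitted".
    """
    if start is None:
        start = 0
    if end is None:
        end = total_chunks
    if start < 0:
        start += total_chunks
    if end < 0:
        end += total_chunks
    return list(range(max(start, 0), min(end, total_chunks)))

# "-5:" fetches the last 5 chunks of a 20-chunk file
print(resolve_chunk_range(-5, None, 20))  # [15, 16, 17, 18, 19]
```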
@@ -477,9 +477,6 @@ This takes the following parameters - dstFs - a remote name string eg "drive2:" for the destination - dstRemote - a path within that remote eg "file2.txt" for the destination -This returns -- jobid - ID of async job to query with job/status - Authentication is required for this call. ### operations/copyurl: Copy the URL to the object @@ -557,9 +554,6 @@ This takes the following parameters - dstFs - a remote name string eg "drive2:" for the destination - dstRemote - a path within that remote eg "file2.txt" for the destination -This returns -- jobid - ID of async job to query with job/status - Authentication is required for this call. ### operations/purge: Remove a directory or container and all of its contents @@ -637,6 +631,20 @@ Only supply the options you wish to change. If an option is unknown it will be silently ignored. Not all options will have an effect when changed like this. +For example: + +This sets DEBUG level logs (-vv) + + rclone rc options/set --json '{"main": {"LogLevel": 8}}' + +And this sets INFO level logs (-v) + + rclone rc options/set --json '{"main": {"LogLevel": 7}}' + +And this sets NOTICE level logs (normal without -v) + + rclone rc options/set --json '{"main": {"LogLevel": 6}}' + ### rc/error: This returns an error This returns an error with the input as part of its error string. @@ -668,8 +676,6 @@ This takes the following parameters - srcFs - a remote name string eg "drive:src" for the source - dstFs - a remote name string eg "drive:dst" for the destination -This returns -- jobid - ID of async job to query with job/status See the [copy command](/commands/rclone_copy/) command for more information on the above. 
@@ -683,8 +689,6 @@ This takes the following parameters - dstFs - a remote name string eg "drive:dst" for the destination - deleteEmptySrcDirs - delete empty src directories if set -This returns -- jobid - ID of async job to query with job/status See the [move command](/commands/rclone_move/) command for more information on the above. @@ -697,8 +701,6 @@ This takes the following parameters - srcFs - a remote name string eg "drive:src" for the source - dstFs - a remote name string eg "drive:dst" for the destination -This returns -- jobid - ID of async job to query with job/status See the [sync command](/commands/rclone_sync/) command for more information on the above. diff --git a/docs/content/s3.md b/docs/content/s3.md index 84de38fd5..05021f78c 100644 --- a/docs/content/s3.md +++ b/docs/content/s3.md @@ -499,6 +499,9 @@ Region to connect to. - "eu-west-2" - EU (London) Region - Needs location constraint eu-west-2. + - "eu-north-1" + - EU (Stockholm) Region + - Needs location constraint eu-north-1. - "eu-central-1" - EU (Frankfurt) Region - Needs location constraint eu-central-1. @@ -597,9 +600,9 @@ Specify if using an IBM COS On Premise. - "s3.ams-eu-geo.objectstorage.service.networklayer.com" - EU Cross Region Amsterdam Private Endpoint - "s3.eu-gb.objectstorage.softlayer.net" - - Great Britan Endpoint + - Great Britain Endpoint - "s3.eu-gb.objectstorage.service.networklayer.com" - - Great Britan Private Endpoint + - Great Britain Private Endpoint - "s3.ap-geo.objectstorage.softlayer.net" - APAC Cross Regional Endpoint - "s3.tok-ap-geo.objectstorage.softlayer.net" @@ -720,6 +723,8 @@ Used when creating buckets only. - EU (Ireland) Region. - "eu-west-2" - EU (London) Region. + - "eu-north-1" + - EU (Stockholm) Region. - "EU" - EU Region. 
- "ap-southeast-1" @@ -762,7 +767,7 @@ For on-prem COS, do not make a selection from this list, hit enter - "us-east-flex" - US East Region Flex - "us-south-standard" - - US Sout hRegion Standard + - US South Region Standard - "us-south-vault" - US South Region Vault - "us-south-cold" @@ -778,13 +783,13 @@ For on-prem COS, do not make a selection from this list, hit enter - "eu-flex" - EU Cross Region Flex - "eu-gb-standard" - - Great Britan Standard + - Great Britain Standard - "eu-gb-vault" - - Great Britan Vault + - Great Britain Vault - "eu-gb-cold" - - Great Britan Cold + - Great Britain Cold - "eu-gb-flex" - - Great Britan Flex + - Great Britain Flex - "ap-standard" - APAC Standard - "ap-vault" @@ -824,6 +829,8 @@ Leave blank if not sure. Used when creating buckets only. Canned ACL used when creating buckets and storing or copying objects. +This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too. + For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl Note that this ACL is applied when server side copying objects as S3 @@ -919,17 +926,43 @@ The storage class to use when storing new objects in OSS. - Type: string - Default: "" - Examples: - - "Standard" + - "" + - Default + - "STANDARD" - Standard storage class - - "Archive" + - "GLACIER" - Archive storage mode. - - "IA" + - "STANDARD_IA" - Infrequent access storage mode. ### Advanced Options Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)). +#### --s3-bucket-acl + +Canned ACL used when creating buckets. + +For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl + +Note that this ACL is applied only when creating buckets. If it +isn't set then "acl" is used instead.
+ +- Config: bucket_acl +- Env Var: RCLONE_S3_BUCKET_ACL +- Type: string +- Default: "" +- Examples: + - "private" + - Owner gets FULL_CONTROL. No one else has access rights (default). + - "public-read" + - Owner gets FULL_CONTROL. The AllUsers group gets READ access. + - "public-read-write" + - Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. + - Granting this on a bucket is generally not recommended. + - "authenticated-read" + - Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. + #### --s3-upload-cutoff Cutoff for switching to chunked upload diff --git a/docs/content/swift.md b/docs/content/swift.md index e135c597f..ce809c8fd 100644 --- a/docs/content/swift.md +++ b/docs/content/swift.md @@ -329,33 +329,6 @@ User ID to log in - optional - most swift systems use user and leave this blank - Type: string - Default: "" -#### --swift-application-credential-id - -Application Credential ID to log in - optional (v3 auth) (OS_APPLICATION_CREDENTIAL_ID). - -- Config: application_credential_id -- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_ID -- Type: string -- Default: "" - -#### --swift-application-credential-name - -Application Credential name to log in - optional (v3 auth) (OS_APPLICATION_CREDENTIAL_NAME). - -- Config: application_credential_name -- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_NAME -- Type: string -- Default: "" - -#### --swift-application-credential-secret - -Application Credential secret to log in - optional (v3 auth) (OS_APPLICATION_CREDENTIAL_SECRET). 
- -- Config: application_credential_secret -- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_SECRET -- Type: string -- Default: "" - #### --swift-domain User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) @@ -419,6 +392,33 @@ Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) - Type: string - Default: "" +#### --swift-application-credential-id + +Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + +- Config: application_credential_id +- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_ID +- Type: string +- Default: "" + +#### --swift-application-credential-name + +Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + +- Config: application_credential_name +- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_NAME +- Type: string +- Default: "" + +#### --swift-application-credential-secret + +Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + +- Config: application_credential_secret +- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_SECRET +- Type: string +- Default: "" + #### --swift-auth-version AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) @@ -481,6 +481,24 @@ default for this is 5GB which is its maximum value. - Type: SizeSuffix - Default: 5G +#### --swift-no-chunk + +Don't chunk files during streaming upload. + +When doing streaming uploads (eg using rcat or mount) setting this +flag will cause the swift backend to not upload chunked files. + +This will limit the maximum upload size to 5GB. However non chunked +files are easier to deal with and have an MD5SUM. + +Rclone will still chunk files bigger than chunk_size when doing normal +copy operations. 
+ +- Config: no_chunk +- Env Var: RCLONE_SWIFT_NO_CHUNK +- Type: bool +- Default: false + ### Modified time ### diff --git a/docs/layouts/partials/version.html b/docs/layouts/partials/version.html index b62d7bb3b..c158ec352 100644 --- a/docs/layouts/partials/version.html +++ b/docs/layouts/partials/version.html @@ -1 +1 @@ -v1.45 \ No newline at end of file +v1.46 \ No newline at end of file diff --git a/fs/version.go b/fs/version.go index 793a52737..b0988033e 100644 --- a/fs/version.go +++ b/fs/version.go @@ -1,4 +1,4 @@ package fs // Version of rclone -var Version = "v1.45-DEV" +var Version = "v1.46" diff --git a/rclone.1 b/rclone.1 index a57e92a52..558e52fe3 100644 --- a/rclone.1 +++ b/rclone.1 @@ -1,7 +1,7 @@ .\"t .\" Automatically generated by Pandoc 1.19.2.4 .\" -.TH "rclone" "1" "Nov 24, 2018" "User Manual" "" +.TH "rclone" "1" "Feb 09, 2019" "User Manual" "" .hy .SH Rclone .PP @@ -10,6 +10,8 @@ Rclone is a command line program to sync files and directories to and from: .IP \[bu] 2 +Alibaba Cloud (Aliyun) Object Storage System (OSS) +.IP \[bu] 2 Amazon Drive (See note (/amazonclouddrive/#status)) .IP \[bu] 2 Amazon S3 @@ -70,6 +72,8 @@ QingStor .IP \[bu] 2 Rackspace Cloud Files .IP \[bu] 2 +Scaleway +.IP \[bu] 2 SFTP .IP \[bu] 2 Wasabi @@ -474,6 +478,21 @@ directory". This applies to all commands and whether you are talking about the source or destination. .PP +See the \-\-no\-traverse (/docs/#no-traverse) option for controlling +whether rclone lists the destination directory or not. +Supplying this option when copying a small number of files into a large +destination can speed transfers up greatly. 
+.PP +For example, if you have many files in /path/to/src but only a few of +them change every day, you can copy all the files which have changed +recently very efficiently like this: +.IP +.nf +\f[C] +rclone\ copy\ \-\-max\-age\ 24h\ \-\-no\-traverse\ /path/to/src\ remote: +\f[] +.fi +.PP \f[B]Note\f[]: Use the \f[C]\-P\f[]/\f[C]\-\-progress\f[] flag to view real\-time transfer statistics .IP @@ -552,6 +571,11 @@ original (if no errors on copy) in \f[C]source:path\f[]. If you want to delete empty source directories after move, use the \-\-delete\-empty\-src\-dirs flag. .PP +See the \-\-no\-traverse (/docs/#no-traverse) option for controlling +whether rclone lists the destination directory or not. +Supplying this option when moving a small number of files into a large +destination can speed transfers up greatly. +.PP \f[B]Important\f[]: Since this can cause data loss, test first with the \-\-dry\-run flag. .PP @@ -1378,6 +1402,20 @@ you would do: rclone\ config\ create\ myremote\ swift\ env_auth\ true \f[] .fi +.PP +Note that if the config process would normally ask a question the +default is taken. +Each time that happens rclone will print a message saying how to affect +the value taken. +.PP +So for example if you wanted to configure a Google Drive remote but +using remote authorization you would do this: +.IP +.nf +\f[C] +rclone\ config\ create\ mydrive\ drive\ config_is_local\ false +\f[] +.fi .IP .nf \f[C] @@ -1551,6 +1589,15 @@ you would do: rclone\ config\ update\ myremote\ swift\ env_auth\ true \f[] .fi +.PP +If the remote uses oauth the token will be updated; if you don\[aq]t +require this add an extra parameter thus: +.IP +.nf +\f[C] +rclone\ config\ update\ myremote\ swift\ env_auth\ true\ config_refresh_token\ false +\f[] +.fi .IP .nf \f[C] @@ -1977,7 +2024,7 @@ rclone\ listremotes\ [flags] .nf \f[C] \ \ \-h,\ \-\-help\ \ \ help\ for\ listremotes -\ \ \-l,\ \-\-long\ \ \ Show\ the\ type\ as\ well\ as\ names.
+\ \ \ \ \ \ \-\-long\ \ \ Show\ the\ type\ as\ well\ as\ names. \f[] .fi .SS rclone lsf @@ -2191,7 +2238,13 @@ If "remote:path" contains the file "subfolder/file.txt", the Path for When used without \-\-recursive the Path will always be the same as Name. .PP -The time is in RFC3339 format with nanosecond precision. +The time is in RFC3339 format with up to nanosecond precision. +The number of decimal digits in the seconds will depend on the precision +to which the remote can hold the times, so if times are accurate to the +nearest millisecond (eg Google Drive) then 3 digits will always be shown +("2017\-05\-31T16:15:57.034+01:00") whereas if the times are accurate to +the nearest second (Dropbox, Box, WebDav etc) no digits will be shown +("2017\-05\-31T16:15:57+01:00"). .PP The whole output can be processed as a JSON blob, or alternatively it can be processed line by line as each item is written one to a line. @@ -2482,6 +2535,7 @@ may find that you need one or the other or both. \-\-vfs\-cache\-max\-age\ duration\ \ \ \ \ \ \ \ \ Max\ age\ of\ objects\ in\ the\ cache.\ (default\ 1h0m0s) \-\-vfs\-cache\-mode\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ Cache\ mode\ off|minimal|writes|full\ (default\ "off") \-\-vfs\-cache\-poll\-interval\ duration\ \ \ Interval\ to\ poll\ the\ cache\ for\ stale\ objects.\ (default\ 1m0s) +\-\-vfs\-cache\-max\-size\ int\ \ \ \ \ \ \ \ \ \ \ \ \ Max\ total\ size\ of\ objects\ in\ the\ cache.\ (default\ off) \f[] .fi .PP @@ -2500,6 +2554,11 @@ Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won\[aq]t get written back to the remote. However they will still be in the on disk cache. +.PP +If using \-\-vfs\-cache\-max\-size note that the cache may exceed this +size for two reasons. +Firstly because it is only checked every \-\-vfs\-cache\-poll\-interval. +Secondly because open files cannot be evicted from the cache.
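The two reasons the cache can exceed \-\-vfs\-cache\-max\-size can be illustrated with a short Python sketch. This is a toy model, not rclone's cache code; `trim_cache` and its entry format are invented for illustration:

```python
def trim_cache(entries, max_size):
    """Evict oldest closed files until the cache fits under max_size.

    entries: list of dicts with "name", "size", "open" and "age" keys.
    Open files are never evicted, so the total can stay above max_size
    until they are closed -- and nothing at all happens between polls.
    """
    total = sum(e["size"] for e in entries)
    evicted = []
    for e in sorted(entries, key=lambda e: e["age"], reverse=True):
        if total <= max_size:
            break
        if e["open"]:
            continue  # open files cannot be evicted
        evicted.append(e["name"])
        total -= e["size"]
    return evicted, total

# A single open file keeps the cache above the limit
print(trim_cache([{"name": "a", "size": 100, "open": True, "age": 10}], 50))
# ([], 100)
```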
.SS \-\-vfs\-cache\-mode off .PP In this mode the cache will read directly from the remote and write @@ -2574,34 +2633,37 @@ rclone\ mount\ remote:path\ /path/to/mountpoint\ [flags] .IP .nf \f[C] -\ \ \ \ \ \ \-\-allow\-non\-empty\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Allow\ mounting\ over\ a\ non\-empty\ directory. -\ \ \ \ \ \ \-\-allow\-other\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Allow\ access\ to\ other\ users. -\ \ \ \ \ \ \-\-allow\-root\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Allow\ access\ to\ root\ user. -\ \ \ \ \ \ \-\-attr\-timeout\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ \ Time\ for\ which\ file/directory\ attributes\ are\ cached.\ (default\ 1s) -\ \ \ \ \ \ \-\-daemon\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Run\ mount\ as\ a\ daemon\ (background\ mode). -\ \ \ \ \ \ \-\-daemon\-timeout\ duration\ \ \ \ \ \ \ \ \ \ \ \ Time\ limit\ for\ rclone\ to\ respond\ to\ kernel\ (not\ supported\ by\ all\ OSes). -\ \ \ \ \ \ \-\-debug\-fuse\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Debug\ the\ FUSE\ internals\ \-\ needs\ \-v. -\ \ \ \ \ \ \-\-default\-permissions\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Makes\ kernel\ enforce\ access\ control\ based\ on\ the\ file\ mode. -\ \ \ \ \ \ \-\-dir\-cache\-time\ duration\ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ cache\ directory\ entries\ for.\ (default\ 5m0s) -\ \ \ \ \ \ \-\-fuse\-flag\ stringArray\ \ \ \ \ \ \ \ \ \ \ \ \ \ Flags\ or\ arguments\ to\ be\ passed\ direct\ to\ libfuse/WinFsp.\ Repeat\ if\ required. 
-\ \ \ \ \ \ \-\-gid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ gid\ field\ set\ by\ the\ filesystem.\ (default\ 502) -\ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ help\ for\ mount -\ \ \ \ \ \ \-\-max\-read\-ahead\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ number\ of\ bytes\ that\ can\ be\ prefetched\ for\ sequential\ reads.\ (default\ 128k) -\ \ \ \ \ \ \-\-no\-checksum\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ compare\ checksums\ on\ up/download. -\ \ \ \ \ \ \-\-no\-modtime\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ read/write\ the\ modification\ time\ (can\ speed\ things\ up). -\ \ \ \ \ \ \-\-no\-seek\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ allow\ seeking\ in\ files. -\ \ \-o,\ \-\-option\ stringArray\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Option\ for\ libfuse/WinFsp.\ Repeat\ if\ required. -\ \ \ \ \ \ \-\-poll\-interval\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ wait\ between\ polling\ for\ changes.\ Must\ be\ smaller\ than\ dir\-cache\-time.\ Only\ on\ supported\ remotes.\ Set\ to\ 0\ to\ disable.\ (default\ 1m0s) -\ \ \ \ \ \ \-\-read\-only\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Mount\ read\-only. -\ \ \ \ \ \ \-\-uid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ uid\ field\ set\ by\ the\ filesystem.\ (default\ 502) -\ \ \ \ \ \ \-\-umask\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ permission\ bits\ set\ by\ the\ filesystem. 
-\ \ \ \ \ \ \-\-vfs\-cache\-max\-age\ duration\ \ \ \ \ \ \ \ \ Max\ age\ of\ objects\ in\ the\ cache.\ (default\ 1h0m0s) -\ \ \ \ \ \ \-\-vfs\-cache\-mode\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ Cache\ mode\ off|minimal|writes|full\ (default\ "off") -\ \ \ \ \ \ \-\-vfs\-cache\-poll\-interval\ duration\ \ \ Interval\ to\ poll\ the\ cache\ for\ stale\ objects.\ (default\ 1m0s) -\ \ \ \ \ \ \-\-vfs\-read\-chunk\-size\ int\ \ \ \ \ \ \ \ \ \ \ \ Read\ the\ source\ objects\ in\ chunks.\ (default\ 128M) -\ \ \ \ \ \ \-\-vfs\-read\-chunk\-size\-limit\ int\ \ \ \ \ \ If\ greater\ than\ \-\-vfs\-read\-chunk\-size,\ double\ the\ chunk\ size\ after\ each\ chunk\ read,\ until\ the\ limit\ is\ reached.\ \[aq]off\[aq]\ is\ unlimited.\ (default\ off) -\ \ \ \ \ \ \-\-volname\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Set\ the\ volume\ name\ (not\ supported\ by\ all\ OSes). -\ \ \ \ \ \ \-\-write\-back\-cache\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Makes\ kernel\ buffer\ writes\ before\ sending\ them\ to\ rclone.\ Without\ this,\ writethrough\ caching\ is\ used. +\ \ \ \ \ \ \-\-allow\-non\-empty\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Allow\ mounting\ over\ a\ non\-empty\ directory. +\ \ \ \ \ \ \-\-allow\-other\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Allow\ access\ to\ other\ users. +\ \ \ \ \ \ \-\-allow\-root\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Allow\ access\ to\ root\ user. +\ \ \ \ \ \ \-\-attr\-timeout\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Time\ for\ which\ file/directory\ attributes\ are\ cached.\ (default\ 1s) +\ \ \ \ \ \ \-\-daemon\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Run\ mount\ as\ a\ daemon\ (background\ mode). +\ \ \ \ \ \ \-\-daemon\-timeout\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Time\ limit\ for\ rclone\ to\ respond\ to\ kernel\ (not\ supported\ by\ all\ OSes). 
+\ \ \ \ \ \ \-\-debug\-fuse\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Debug\ the\ FUSE\ internals\ \-\ needs\ \-v. +\ \ \ \ \ \ \-\-default\-permissions\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Makes\ kernel\ enforce\ access\ control\ based\ on\ the\ file\ mode. +\ \ \ \ \ \ \-\-dir\-cache\-time\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ cache\ directory\ entries\ for.\ (default\ 5m0s) +\ \ \ \ \ \ \-\-dir\-perms\ FileMode\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Directory\ permissions\ (default\ 0777) +\ \ \ \ \ \ \-\-file\-perms\ FileMode\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ File\ permissions\ (default\ 0666) +\ \ \ \ \ \ \-\-fuse\-flag\ stringArray\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Flags\ or\ arguments\ to\ be\ passed\ direct\ to\ libfuse/WinFsp.\ Repeat\ if\ required. +\ \ \ \ \ \ \-\-gid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ gid\ field\ set\ by\ the\ filesystem.\ (default\ 502) +\ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ help\ for\ mount +\ \ \ \ \ \ \-\-max\-read\-ahead\ SizeSuffix\ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ number\ of\ bytes\ that\ can\ be\ prefetched\ for\ sequential\ reads.\ (default\ 128k) +\ \ \ \ \ \ \-\-no\-checksum\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ compare\ checksums\ on\ up/download. +\ \ \ \ \ \ \-\-no\-modtime\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ read/write\ the\ modification\ time\ (can\ speed\ things\ up). +\ \ \ \ \ \ \-\-no\-seek\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ allow\ seeking\ in\ files. +\ \ \-o,\ \-\-option\ stringArray\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Option\ for\ libfuse/WinFsp.\ Repeat\ if\ required. 
+\ \ \ \ \ \ \-\-poll\-interval\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ wait\ between\ polling\ for\ changes.\ Must\ be\ smaller\ than\ dir\-cache\-time.\ Only\ on\ supported\ remotes.\ Set\ to\ 0\ to\ disable.\ (default\ 1m0s) +\ \ \ \ \ \ \-\-read\-only\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Mount\ read\-only. +\ \ \ \ \ \ \-\-uid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ uid\ field\ set\ by\ the\ filesystem.\ (default\ 502) +\ \ \ \ \ \ \-\-umask\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ permission\ bits\ set\ by\ the\ filesystem. +\ \ \ \ \ \ \-\-vfs\-cache\-max\-age\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ Max\ age\ of\ objects\ in\ the\ cache.\ (default\ 1h0m0s) +\ \ \ \ \ \ \-\-vfs\-cache\-max\-size\ SizeSuffix\ \ \ \ \ \ \ \ \ \ Max\ total\ size\ of\ objects\ in\ the\ cache.\ (default\ off) +\ \ \ \ \ \ \-\-vfs\-cache\-mode\ CacheMode\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Cache\ mode\ off|minimal|writes|full\ (default\ off) +\ \ \ \ \ \ \-\-vfs\-cache\-poll\-interval\ duration\ \ \ \ \ \ \ Interval\ to\ poll\ the\ cache\ for\ stale\ objects.\ (default\ 1m0s) +\ \ \ \ \ \ \-\-vfs\-read\-chunk\-size\ SizeSuffix\ \ \ \ \ \ \ \ \ Read\ the\ source\ objects\ in\ chunks.\ (default\ 128M) +\ \ \ \ \ \ \-\-vfs\-read\-chunk\-size\-limit\ SizeSuffix\ \ \ If\ greater\ than\ \-\-vfs\-read\-chunk\-size,\ double\ the\ chunk\ size\ after\ each\ chunk\ read,\ until\ the\ limit\ is\ reached.\ \[aq]off\[aq]\ is\ unlimited.\ (default\ off) +\ \ \ \ \ \ \-\-volname\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Set\ the\ volume\ name\ (not\ supported\ by\ all\ OSes). +\ \ \ \ \ \ \-\-write\-back\-cache\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Makes\ kernel\ buffer\ writes\ before\ sending\ them\ to\ rclone.\ Without\ this,\ writethrough\ caching\ is\ used. 
\f[] .fi .SS rclone moveto @@ -2828,7 +2890,7 @@ rclone\ rcat\ remote:path\ [flags] Run rclone listening to remote control commands only. .SS Synopsis .PP -This runs rclone so that it only listents to remote control commands. +This runs rclone so that it only listens to remote control commands. .PP This is useful if you are controlling rclone via the rc API. .PP @@ -2908,6 +2970,220 @@ rclone\ serve\ \ [opts]\ \ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ serve \f[] .fi +.SS rclone serve dlna +.PP +Serve remote:path over DLNA +.SS Synopsis +.PP +rclone serve dlna is a DLNA media server for media stored in a rclone +remote. +Many devices, such as the Xbox and PlayStation, can automatically +discover this server in the LAN and play audio/video from it. +VLC is also supported. +Service discovery uses UDP multicast packets (SSDP) and will thus only +work on LANs. +.PP +Rclone will list all files present in the remote, without filtering +based on media formats or file extensions. +Additionally, there is no media transcoding support. +This means that some players might show files that they are not able to +play back correctly. +.SS Server options +.PP +Use \-\-addr to specify which IP address and port the server should +listen on, eg \-\-addr 1.2.3.4:8000 or \-\-addr :8080 to listen to all +IPs. +.SS Directory Cache +.PP +Using the \f[C]\-\-dir\-cache\-time\f[] flag, you can set how long a +directory should be considered up to date and not refreshed from the +backend. +Changes made locally in the mount may appear immediately or invalidate +the cache. +However, changes done on the remote will only be picked up once the +cache expires. +.PP +Alternatively, you can send a \f[C]SIGHUP\f[] signal to rclone for it to +flush all directory caches, regardless of how old they are. 
+Assuming only one rclone instance is running, you can reset the cache +like this: +.IP +.nf +\f[C] +kill\ \-SIGHUP\ $(pidof\ rclone) +\f[] +.fi +.PP +If you configure rclone with a remote control (/rc) then you can use +rclone rc to flush the whole directory cache: +.IP +.nf +\f[C] +rclone\ rc\ vfs/forget +\f[] +.fi +.PP +Or individual files or directories: +.IP +.nf +\f[C] +rclone\ rc\ vfs/forget\ file=path/to/file\ dir=path/to/dir +\f[] +.fi +.SS File Buffering +.PP +The \f[C]\-\-buffer\-size\f[] flag determines the amount of memory that +will be used to buffer data in advance. +.PP +Each open file descriptor will try to keep the specified amount of data +in memory at all times. +The buffered data is bound to one file descriptor and won\[aq]t be +shared between multiple open file descriptors of the same file. +.PP +This flag is an upper limit for the memory used per file descriptor. +The buffer will only use memory for data that is downloaded but not +yet read. +If the buffer is empty, only a small amount of memory will be used. +The maximum memory used by rclone for buffering can be up to +\f[C]\-\-buffer\-size\ *\ open\ files\f[]. +.SS File Caching +.PP +These flags control the VFS file caching options. +The VFS layer is used by rclone mount to make a cloud storage system +work more like a normal file system. +.PP +You\[aq]ll need to enable VFS caching if you want, for example, to read +and write simultaneously to a file. +See below for more details. +.PP +Note that the VFS cache works in addition to the cache backend and you +may find that you need one or the other or both. +.IP +.nf +\f[C] +\-\-cache\-dir\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Directory\ rclone\ will\ use\ for\ caching.
+\-\-vfs\-cache\-max\-age\ duration\ \ \ \ \ \ \ \ \ Max\ age\ of\ objects\ in\ the\ cache.\ (default\ 1h0m0s) +\-\-vfs\-cache\-mode\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ Cache\ mode\ off|minimal|writes|full\ (default\ "off") +\-\-vfs\-cache\-poll\-interval\ duration\ \ \ Interval\ to\ poll\ the\ cache\ for\ stale\ objects.\ (default\ 1m0s) +\-\-vfs\-cache\-max\-size\ int\ \ \ \ \ \ \ \ \ \ \ \ \ Max\ total\ size\ of\ objects\ in\ the\ cache.\ (default\ off) +\f[] +.fi +.PP +If run with \f[C]\-vv\f[] rclone will print the location of the file +cache. +The files are stored in the user cache file area which is OS dependent +but can be controlled with \f[C]\-\-cache\-dir\f[] or setting the +appropriate environment variable. +.PP +The cache has 4 different modes selected by +\f[C]\-\-vfs\-cache\-mode\f[]. +The higher the cache mode the more compatible rclone becomes at the cost +of using disk space. +.PP +Note that files are written back to the remote only when they are closed +so if rclone is quit or dies with open files then these won\[aq]t get +written back to the remote. +However they will still be in the on disk cache. +.PP +If using \-\-vfs\-cache\-max\-size note that the cache may exceed this +size for two reasons. +Firstly because it is only checked every \-\-vfs\-cache\-poll\-interval. +Secondly because open files cannot be evicted from the cache. +.SS \-\-vfs\-cache\-mode off +.PP +In this mode the cache will read directly from the remote and write +directly to the remote without caching anything on disk. 
+.PP +This will mean some operations are not possible +.IP \[bu] 2 +Files can\[aq]t be opened for both read AND write +.IP \[bu] 2 +Files opened for write can\[aq]t be seeked +.IP \[bu] 2 +Existing files opened for write must have O_TRUNC set +.IP \[bu] 2 +Files open for read with O_TRUNC will be opened write only +.IP \[bu] 2 +Files open for write only will behave as if O_TRUNC was supplied +.IP \[bu] 2 +Open modes O_APPEND, O_TRUNC are ignored +.IP \[bu] 2 +If an upload fails it can\[aq]t be retried +.SS \-\-vfs\-cache\-mode minimal +.PP +This is very similar to "off" except that files opened for read AND +write will be buffered to disk. +This means that files opened for write will be a lot more compatible, +but uses minimal disk space. +.PP +These operations are not possible +.IP \[bu] 2 +Files opened for write only can\[aq]t be seeked +.IP \[bu] 2 +Existing files opened for write must have O_TRUNC set +.IP \[bu] 2 +Files opened for write only will ignore O_APPEND, O_TRUNC +.IP \[bu] 2 +If an upload fails it can\[aq]t be retried +.SS \-\-vfs\-cache\-mode writes +.PP +In this mode files opened for read only are still read directly from the +remote, write only and read/write files are buffered to disk first. +.PP +This mode should support all normal file system operations. +.PP +If an upload fails it will be retried up to \-\-low\-level\-retries +times. +.SS \-\-vfs\-cache\-mode full +.PP +In this mode all reads and writes are buffered to and from disk. +When a file is opened for read it will be downloaded in its entirety +first. +.PP +This may be appropriate for your needs, or you may prefer to look at the +cache backend which does a much more sophisticated job of caching, +including caching directory hierarchies and chunks of files. +.PP +In this mode, unlike the others, when a file is written to the disk, it +will be kept on the disk after it is written to the remote. +It will be purged on a schedule according to +\f[C]\-\-vfs\-cache\-max\-age\f[].
+.PP +This mode should support all normal file system operations. +.PP +If an upload or download fails it will be retried up to +\-\-low\-level\-retries times. +.IP +.nf +\f[C] +rclone\ serve\ dlna\ remote:path\ [flags] +\f[] +.fi +.SS Options +.IP +.nf +\f[C] +\ \ \ \ \ \ \-\-addr\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ ip:port\ or\ :port\ to\ bind\ the\ DLNA\ http\ server\ to.\ (default\ ":7879") +\ \ \ \ \ \ \-\-dir\-cache\-time\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ cache\ directory\ entries\ for.\ (default\ 5m0s) +\ \ \ \ \ \ \-\-dir\-perms\ FileMode\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Directory\ permissions\ (default\ 0777) +\ \ \ \ \ \ \-\-file\-perms\ FileMode\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ File\ permissions\ (default\ 0666) +\ \ \ \ \ \ \-\-gid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ gid\ field\ set\ by\ the\ filesystem.\ (default\ 502) +\ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ help\ for\ dlna +\ \ \ \ \ \ \-\-no\-checksum\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ compare\ checksums\ on\ up/download. +\ \ \ \ \ \ \-\-no\-modtime\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ read/write\ the\ modification\ time\ (can\ speed\ things\ up). +\ \ \ \ \ \ \-\-no\-seek\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ allow\ seeking\ in\ files. +\ \ \ \ \ \ \-\-poll\-interval\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ wait\ between\ polling\ for\ changes.\ Must\ be\ smaller\ than\ dir\-cache\-time.\ Only\ on\ supported\ remotes.\ Set\ to\ 0\ to\ disable.\ (default\ 1m0s) +\ \ \ \ \ \ \-\-read\-only\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Mount\ read\-only. 
+\ \ \ \ \ \ \-\-uid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ uid\ field\ set\ by\ the\ filesystem.\ (default\ 502) +\ \ \ \ \ \ \-\-umask\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ permission\ bits\ set\ by\ the\ filesystem.\ (default\ 2) +\ \ \ \ \ \ \-\-vfs\-cache\-max\-age\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ Max\ age\ of\ objects\ in\ the\ cache.\ (default\ 1h0m0s) +\ \ \ \ \ \ \-\-vfs\-cache\-max\-size\ SizeSuffix\ \ \ \ \ \ \ \ \ \ Max\ total\ size\ of\ objects\ in\ the\ cache.\ (default\ off) +\ \ \ \ \ \ \-\-vfs\-cache\-mode\ CacheMode\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Cache\ mode\ off|minimal|writes|full\ (default\ off) +\ \ \ \ \ \ \-\-vfs\-cache\-poll\-interval\ duration\ \ \ \ \ \ \ Interval\ to\ poll\ the\ cache\ for\ stale\ objects.\ (default\ 1m0s) +\ \ \ \ \ \ \-\-vfs\-read\-chunk\-size\ SizeSuffix\ \ \ \ \ \ \ \ \ Read\ the\ source\ objects\ in\ chunks.\ (default\ 128M) +\ \ \ \ \ \ \-\-vfs\-read\-chunk\-size\-limit\ SizeSuffix\ \ \ If\ greater\ than\ \-\-vfs\-read\-chunk\-size,\ double\ the\ chunk\ size\ after\ each\ chunk\ read,\ until\ the\ limit\ is\ reached.\ \[aq]off\[aq]\ is\ unlimited.\ (default\ off) +\f[] +.fi .SS rclone serve ftp .PP Serve remote:path over FTP. @@ -3005,6 +3281,7 @@ may find that you need one or the other or both. 
\-\-vfs\-cache\-max\-age\ duration\ \ \ \ \ \ \ \ \ Max\ age\ of\ objects\ in\ the\ cache.\ (default\ 1h0m0s) \-\-vfs\-cache\-mode\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ Cache\ mode\ off|minimal|writes|full\ (default\ "off") \-\-vfs\-cache\-poll\-interval\ duration\ \ \ Interval\ to\ poll\ the\ cache\ for\ stale\ objects.\ (default\ 1m0s) +\-\-vfs\-cache\-max\-size\ int\ \ \ \ \ \ \ \ \ \ \ \ \ Max\ total\ size\ of\ objects\ in\ the\ cache.\ (default\ off) \f[] .fi .PP @@ -3023,6 +3300,11 @@ Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won\[aq]t get written back to the remote. However they will still be in the on disk cache. +.PP +If using \-\-vfs\-cache\-max\-size note that the cache may exceed this +size for two reasons. +Firstly because it is only checked every \-\-vfs\-cache\-poll\-interval. +Secondly because open files cannot be evicted from the cache. .SS \-\-vfs\-cache\-mode off .PP In this mode the cache will read directly from the remote and write @@ -3097,25 +3379,28 @@ rclone\ serve\ ftp\ remote:path\ [flags] .IP .nf \f[C] -\ \ \ \ \ \ \-\-addr\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ IPaddress:Port\ or\ :Port\ to\ bind\ server\ to.\ (default\ "localhost:2121") -\ \ \ \ \ \ \-\-dir\-cache\-time\ duration\ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ cache\ directory\ entries\ for.\ (default\ 5m0s) -\ \ \ \ \ \ \-\-gid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ gid\ field\ set\ by\ the\ filesystem.\ (default\ 502) -\ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ help\ for\ ftp -\ \ \ \ \ \ \-\-no\-checksum\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ compare\ checksums\ on\ up/download. -\ \ \ \ \ \ \-\-no\-modtime\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ read/write\ the\ modification\ time\ (can\ speed\ things\ up). 
-\ \ \ \ \ \ \-\-no\-seek\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ allow\ seeking\ in\ files. -\ \ \ \ \ \ \-\-pass\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Password\ for\ authentication.\ (empty\ value\ allow\ every\ password) -\ \ \ \ \ \ \-\-passive\-port\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Passive\ port\ range\ to\ use.\ (default\ "30000\-32000") -\ \ \ \ \ \ \-\-poll\-interval\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ wait\ between\ polling\ for\ changes.\ Must\ be\ smaller\ than\ dir\-cache\-time.\ Only\ on\ supported\ remotes.\ Set\ to\ 0\ to\ disable.\ (default\ 1m0s) -\ \ \ \ \ \ \-\-read\-only\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Mount\ read\-only. -\ \ \ \ \ \ \-\-uid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ uid\ field\ set\ by\ the\ filesystem.\ (default\ 502) -\ \ \ \ \ \ \-\-umask\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ permission\ bits\ set\ by\ the\ filesystem.\ (default\ 2) -\ \ \ \ \ \ \-\-user\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ User\ name\ for\ authentication.\ (default\ "anonymous") -\ \ \ \ \ \ \-\-vfs\-cache\-max\-age\ duration\ \ \ \ \ \ \ \ \ Max\ age\ of\ objects\ in\ the\ cache.\ (default\ 1h0m0s) -\ \ \ \ \ \ \-\-vfs\-cache\-mode\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ Cache\ mode\ off|minimal|writes|full\ (default\ "off") -\ \ \ \ \ \ \-\-vfs\-cache\-poll\-interval\ duration\ \ \ Interval\ to\ poll\ the\ cache\ for\ stale\ objects.\ (default\ 1m0s) -\ \ \ \ \ \ \-\-vfs\-read\-chunk\-size\ int\ \ \ \ \ \ \ \ \ \ \ \ Read\ the\ source\ objects\ in\ chunks.\ (default\ 128M) -\ \ \ \ \ \ \-\-vfs\-read\-chunk\-size\-limit\ int\ \ \ \ \ \ If\ greater\ than\ \-\-vfs\-read\-chunk\-size,\ double\ the\ chunk\ size\ after\ each\ chunk\ read,\ until\ the\ limit\ is\ reached.\ \[aq]off\[aq]\ is\ unlimited.\ (default\ off) +\ \ \ \ \ \ \-\-addr\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 
IPaddress:Port\ or\ :Port\ to\ bind\ server\ to.\ (default\ "localhost:2121") +\ \ \ \ \ \ \-\-dir\-cache\-time\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ cache\ directory\ entries\ for.\ (default\ 5m0s) +\ \ \ \ \ \ \-\-dir\-perms\ FileMode\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Directory\ permissions\ (default\ 0777) +\ \ \ \ \ \ \-\-file\-perms\ FileMode\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ File\ permissions\ (default\ 0666) +\ \ \ \ \ \ \-\-gid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ gid\ field\ set\ by\ the\ filesystem.\ (default\ 502) +\ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ help\ for\ ftp +\ \ \ \ \ \ \-\-no\-checksum\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ compare\ checksums\ on\ up/download. +\ \ \ \ \ \ \-\-no\-modtime\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ read/write\ the\ modification\ time\ (can\ speed\ things\ up). +\ \ \ \ \ \ \-\-no\-seek\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ allow\ seeking\ in\ files. +\ \ \ \ \ \ \-\-pass\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Password\ for\ authentication.\ (empty\ value\ allow\ every\ password) +\ \ \ \ \ \ \-\-passive\-port\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Passive\ port\ range\ to\ use.\ (default\ "30000\-32000") +\ \ \ \ \ \ \-\-poll\-interval\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ wait\ between\ polling\ for\ changes.\ Must\ be\ smaller\ than\ dir\-cache\-time.\ Only\ on\ supported\ remotes.\ Set\ to\ 0\ to\ disable.\ (default\ 1m0s) +\ \ \ \ \ \ \-\-read\-only\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Mount\ read\-only. 
+\ \ \ \ \ \ \-\-uid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ uid\ field\ set\ by\ the\ filesystem.\ (default\ 502) +\ \ \ \ \ \ \-\-umask\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ permission\ bits\ set\ by\ the\ filesystem.\ (default\ 2) +\ \ \ \ \ \ \-\-user\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ User\ name\ for\ authentication.\ (default\ "anonymous") +\ \ \ \ \ \ \-\-vfs\-cache\-max\-age\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ Max\ age\ of\ objects\ in\ the\ cache.\ (default\ 1h0m0s) +\ \ \ \ \ \ \-\-vfs\-cache\-max\-size\ SizeSuffix\ \ \ \ \ \ \ \ \ \ Max\ total\ size\ of\ objects\ in\ the\ cache.\ (default\ off) +\ \ \ \ \ \ \-\-vfs\-cache\-mode\ CacheMode\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Cache\ mode\ off|minimal|writes|full\ (default\ off) +\ \ \ \ \ \ \-\-vfs\-cache\-poll\-interval\ duration\ \ \ \ \ \ \ Interval\ to\ poll\ the\ cache\ for\ stale\ objects.\ (default\ 1m0s) +\ \ \ \ \ \ \-\-vfs\-read\-chunk\-size\ SizeSuffix\ \ \ \ \ \ \ \ \ Read\ the\ source\ objects\ in\ chunks.\ (default\ 128M) +\ \ \ \ \ \ \-\-vfs\-read\-chunk\-size\-limit\ SizeSuffix\ \ \ If\ greater\ than\ \-\-vfs\-read\-chunk\-size,\ double\ the\ chunk\ size\ after\ each\ chunk\ read,\ until\ the\ limit\ is\ reached.\ \[aq]off\[aq]\ is\ unlimited.\ (default\ off) \f[] .fi .SS rclone serve http @@ -3262,6 +3547,7 @@ may find that you need one or the other or both. 
\-\-vfs\-cache\-max\-age\ duration\ \ \ \ \ \ \ \ \ Max\ age\ of\ objects\ in\ the\ cache.\ (default\ 1h0m0s) \-\-vfs\-cache\-mode\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ Cache\ mode\ off|minimal|writes|full\ (default\ "off") \-\-vfs\-cache\-poll\-interval\ duration\ \ \ Interval\ to\ poll\ the\ cache\ for\ stale\ objects.\ (default\ 1m0s) +\-\-vfs\-cache\-max\-size\ int\ \ \ \ \ \ \ \ \ \ \ \ \ Max\ total\ size\ of\ objects\ in\ the\ cache.\ (default\ off) \f[] .fi .PP @@ -3280,6 +3566,11 @@ Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won\[aq]t get written back to the remote. However they will still be in the on disk cache. +.PP +If using \-\-vfs\-cache\-max\-size note that the cache may exceed this +size for two reasons. +Firstly because it is only checked every \-\-vfs\-cache\-poll\-interval. +Secondly because open files cannot be evicted from the cache. .SS \-\-vfs\-cache\-mode off .PP In this mode the cache will read directly from the remote and write @@ -3354,32 +3645,35 @@ rclone\ serve\ http\ remote:path\ [flags] .IP .nf \f[C] -\ \ \ \ \ \ \-\-addr\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ IPaddress:Port\ or\ :Port\ to\ bind\ server\ to.\ (default\ "localhost:8080") -\ \ \ \ \ \ \-\-cert\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ SSL\ PEM\ key\ (concatenation\ of\ certificate\ and\ CA\ certificate) -\ \ \ \ \ \ \-\-client\-ca\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Client\ certificate\ authority\ to\ verify\ clients\ with -\ \ \ \ \ \ \-\-dir\-cache\-time\ duration\ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ cache\ directory\ entries\ for.\ (default\ 5m0s) -\ \ \ \ \ \ \-\-gid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ gid\ field\ set\ by\ the\ filesystem.\ (default\ 502) -\ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ help\ for\ http -\ \ \ \ \ \ \-\-htpasswd\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 
\ \ \ \ htpasswd\ file\ \-\ if\ not\ provided\ no\ authentication\ is\ done -\ \ \ \ \ \ \-\-key\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ SSL\ PEM\ Private\ key -\ \ \ \ \ \ \-\-max\-header\-bytes\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Maximum\ size\ of\ request\ header\ (default\ 4096) -\ \ \ \ \ \ \-\-no\-checksum\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ compare\ checksums\ on\ up/download. -\ \ \ \ \ \ \-\-no\-modtime\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ read/write\ the\ modification\ time\ (can\ speed\ things\ up). -\ \ \ \ \ \ \-\-no\-seek\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ allow\ seeking\ in\ files. -\ \ \ \ \ \ \-\-pass\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Password\ for\ authentication. -\ \ \ \ \ \ \-\-poll\-interval\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ wait\ between\ polling\ for\ changes.\ Must\ be\ smaller\ than\ dir\-cache\-time.\ Only\ on\ supported\ remotes.\ Set\ to\ 0\ to\ disable.\ (default\ 1m0s) -\ \ \ \ \ \ \-\-read\-only\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Mount\ read\-only. -\ \ \ \ \ \ \-\-realm\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ realm\ for\ authentication\ (default\ "rclone") -\ \ \ \ \ \ \-\-server\-read\-timeout\ duration\ \ \ \ \ \ \ Timeout\ for\ server\ reading\ data\ (default\ 1h0m0s) -\ \ \ \ \ \ \-\-server\-write\-timeout\ duration\ \ \ \ \ \ Timeout\ for\ server\ writing\ data\ (default\ 1h0m0s) -\ \ \ \ \ \ \-\-uid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ uid\ field\ set\ by\ the\ filesystem.\ (default\ 502) -\ \ \ \ \ \ \-\-umask\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ permission\ bits\ set\ by\ the\ filesystem.\ (default\ 2) -\ \ \ \ \ \ \-\-user\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ User\ name\ for\ authentication. 
-\ \ \ \ \ \ \-\-vfs\-cache\-max\-age\ duration\ \ \ \ \ \ \ \ \ Max\ age\ of\ objects\ in\ the\ cache.\ (default\ 1h0m0s) -\ \ \ \ \ \ \-\-vfs\-cache\-mode\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ Cache\ mode\ off|minimal|writes|full\ (default\ "off") -\ \ \ \ \ \ \-\-vfs\-cache\-poll\-interval\ duration\ \ \ Interval\ to\ poll\ the\ cache\ for\ stale\ objects.\ (default\ 1m0s) -\ \ \ \ \ \ \-\-vfs\-read\-chunk\-size\ int\ \ \ \ \ \ \ \ \ \ \ \ Read\ the\ source\ objects\ in\ chunks.\ (default\ 128M) -\ \ \ \ \ \ \-\-vfs\-read\-chunk\-size\-limit\ int\ \ \ \ \ \ If\ greater\ than\ \-\-vfs\-read\-chunk\-size,\ double\ the\ chunk\ size\ after\ each\ chunk\ read,\ until\ the\ limit\ is\ reached.\ \[aq]off\[aq]\ is\ unlimited.\ (default\ off) +\ \ \ \ \ \ \-\-addr\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ IPaddress:Port\ or\ :Port\ to\ bind\ server\ to.\ (default\ "localhost:8080") +\ \ \ \ \ \ \-\-cert\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ SSL\ PEM\ key\ (concatenation\ of\ certificate\ and\ CA\ certificate) +\ \ \ \ \ \ \-\-client\-ca\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Client\ certificate\ authority\ to\ verify\ clients\ with +\ \ \ \ \ \ \-\-dir\-cache\-time\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ cache\ directory\ entries\ for.\ (default\ 5m0s) +\ \ \ \ \ \ \-\-dir\-perms\ FileMode\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Directory\ permissions\ (default\ 0777) +\ \ \ \ \ \ \-\-file\-perms\ FileMode\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ File\ permissions\ (default\ 0666) +\ \ \ \ \ \ \-\-gid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ gid\ field\ set\ by\ the\ filesystem.\ (default\ 502) +\ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ help\ for\ http +\ \ \ \ \ \ \-\-htpasswd\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ htpasswd\ file\ \-\ if\ not\ provided\ no\ authentication\ is\ done +\ \ \ \ \ \ \-\-key\ 
string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ SSL\ PEM\ Private\ key +\ \ \ \ \ \ \-\-max\-header\-bytes\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Maximum\ size\ of\ request\ header\ (default\ 4096) +\ \ \ \ \ \ \-\-no\-checksum\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ compare\ checksums\ on\ up/download. +\ \ \ \ \ \ \-\-no\-modtime\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ read/write\ the\ modification\ time\ (can\ speed\ things\ up). +\ \ \ \ \ \ \-\-no\-seek\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ allow\ seeking\ in\ files. +\ \ \ \ \ \ \-\-pass\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Password\ for\ authentication. +\ \ \ \ \ \ \-\-poll\-interval\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ wait\ between\ polling\ for\ changes.\ Must\ be\ smaller\ than\ dir\-cache\-time.\ Only\ on\ supported\ remotes.\ Set\ to\ 0\ to\ disable.\ (default\ 1m0s) +\ \ \ \ \ \ \-\-read\-only\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Mount\ read\-only. +\ \ \ \ \ \ \-\-realm\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ realm\ for\ authentication\ (default\ "rclone") +\ \ \ \ \ \ \-\-server\-read\-timeout\ duration\ \ \ \ \ \ \ \ \ \ \ Timeout\ for\ server\ reading\ data\ (default\ 1h0m0s) +\ \ \ \ \ \ \-\-server\-write\-timeout\ duration\ \ \ \ \ \ \ \ \ \ Timeout\ for\ server\ writing\ data\ (default\ 1h0m0s) +\ \ \ \ \ \ \-\-uid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ uid\ field\ set\ by\ the\ filesystem.\ (default\ 502) +\ \ \ \ \ \ \-\-umask\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ permission\ bits\ set\ by\ the\ filesystem.\ (default\ 2) +\ \ \ \ \ \ \-\-user\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ User\ name\ for\ authentication. 
+\ \ \ \ \ \ \-\-vfs\-cache\-max\-age\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ Max\ age\ of\ objects\ in\ the\ cache.\ (default\ 1h0m0s) +\ \ \ \ \ \ \-\-vfs\-cache\-max\-size\ SizeSuffix\ \ \ \ \ \ \ \ \ \ Max\ total\ size\ of\ objects\ in\ the\ cache.\ (default\ off) +\ \ \ \ \ \ \-\-vfs\-cache\-mode\ CacheMode\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Cache\ mode\ off|minimal|writes|full\ (default\ off) +\ \ \ \ \ \ \-\-vfs\-cache\-poll\-interval\ duration\ \ \ \ \ \ \ Interval\ to\ poll\ the\ cache\ for\ stale\ objects.\ (default\ 1m0s) +\ \ \ \ \ \ \-\-vfs\-read\-chunk\-size\ SizeSuffix\ \ \ \ \ \ \ \ \ Read\ the\ source\ objects\ in\ chunks.\ (default\ 128M) +\ \ \ \ \ \ \-\-vfs\-read\-chunk\-size\-limit\ SizeSuffix\ \ \ If\ greater\ than\ \-\-vfs\-read\-chunk\-size,\ double\ the\ chunk\ size\ after\ each\ chunk\ read,\ until\ the\ limit\ is\ reached.\ \[aq]off\[aq]\ is\ unlimited.\ (default\ off) \f[] .fi .SS rclone serve restic @@ -3697,6 +3991,7 @@ may find that you need one or the other or both. \-\-vfs\-cache\-max\-age\ duration\ \ \ \ \ \ \ \ \ Max\ age\ of\ objects\ in\ the\ cache.\ (default\ 1h0m0s) \-\-vfs\-cache\-mode\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ Cache\ mode\ off|minimal|writes|full\ (default\ "off") \-\-vfs\-cache\-poll\-interval\ duration\ \ \ Interval\ to\ poll\ the\ cache\ for\ stale\ objects.\ (default\ 1m0s) +\-\-vfs\-cache\-max\-size\ int\ \ \ \ \ \ \ \ \ \ \ \ \ Max\ total\ size\ of\ objects\ in\ the\ cache.\ (default\ off) \f[] .fi .PP @@ -3715,6 +4010,11 @@ Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won\[aq]t get written back to the remote. However they will still be in the on disk cache. +.PP +If using \-\-vfs\-cache\-max\-size note that the cache may exceed this +size for two reasons. +Firstly because it is only checked every \-\-vfs\-cache\-poll\-interval. +Secondly because open files cannot be evicted from the cache. 
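+.PP
+For example (an illustrative invocation \- the remote name and size
+limit are made up), the cache flags might be combined like this
+.IP
+.nf
+\f[C]
+rclone\ serve\ webdav\ remote:path\ \-\-vfs\-cache\-mode\ writes\ \-\-vfs\-cache\-max\-size\ 10G
+\f[]
+.fi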
.SS \-\-vfs\-cache\-mode off .PP In this mode the cache will read directly from the remote and write @@ -3789,33 +4089,36 @@ rclone\ serve\ webdav\ remote:path\ [flags] .IP .nf \f[C] -\ \ \ \ \ \ \-\-addr\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ IPaddress:Port\ or\ :Port\ to\ bind\ server\ to.\ (default\ "localhost:8080") -\ \ \ \ \ \ \-\-cert\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ SSL\ PEM\ key\ (concatenation\ of\ certificate\ and\ CA\ certificate) -\ \ \ \ \ \ \-\-client\-ca\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Client\ certificate\ authority\ to\ verify\ clients\ with -\ \ \ \ \ \ \-\-dir\-cache\-time\ duration\ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ cache\ directory\ entries\ for.\ (default\ 5m0s) -\ \ \ \ \ \ \-\-etag\-hash\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Which\ hash\ to\ use\ for\ the\ ETag,\ or\ auto\ or\ blank\ for\ off -\ \ \ \ \ \ \-\-gid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ gid\ field\ set\ by\ the\ filesystem.\ (default\ 502) -\ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ help\ for\ webdav -\ \ \ \ \ \ \-\-htpasswd\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ htpasswd\ file\ \-\ if\ not\ provided\ no\ authentication\ is\ done -\ \ \ \ \ \ \-\-key\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ SSL\ PEM\ Private\ key -\ \ \ \ \ \ \-\-max\-header\-bytes\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Maximum\ size\ of\ request\ header\ (default\ 4096) -\ \ \ \ \ \ \-\-no\-checksum\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ compare\ checksums\ on\ up/download. -\ \ \ \ \ \ \-\-no\-modtime\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ read/write\ the\ modification\ time\ (can\ speed\ things\ up). -\ \ \ \ \ \ \-\-no\-seek\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ allow\ seeking\ in\ files. -\ \ \ \ \ \ \-\-pass\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Password\ for\ authentication. 
-\ \ \ \ \ \ \-\-poll\-interval\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ wait\ between\ polling\ for\ changes.\ Must\ be\ smaller\ than\ dir\-cache\-time.\ Only\ on\ supported\ remotes.\ Set\ to\ 0\ to\ disable.\ (default\ 1m0s) -\ \ \ \ \ \ \-\-read\-only\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Mount\ read\-only. -\ \ \ \ \ \ \-\-realm\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ realm\ for\ authentication\ (default\ "rclone") -\ \ \ \ \ \ \-\-server\-read\-timeout\ duration\ \ \ \ \ \ \ Timeout\ for\ server\ reading\ data\ (default\ 1h0m0s) -\ \ \ \ \ \ \-\-server\-write\-timeout\ duration\ \ \ \ \ \ Timeout\ for\ server\ writing\ data\ (default\ 1h0m0s) -\ \ \ \ \ \ \-\-uid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ uid\ field\ set\ by\ the\ filesystem.\ (default\ 502) -\ \ \ \ \ \ \-\-umask\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ permission\ bits\ set\ by\ the\ filesystem.\ (default\ 2) -\ \ \ \ \ \ \-\-user\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ User\ name\ for\ authentication. 
-\ \ \ \ \ \ \-\-vfs\-cache\-max\-age\ duration\ \ \ \ \ \ \ \ \ Max\ age\ of\ objects\ in\ the\ cache.\ (default\ 1h0m0s) -\ \ \ \ \ \ \-\-vfs\-cache\-mode\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ Cache\ mode\ off|minimal|writes|full\ (default\ "off") -\ \ \ \ \ \ \-\-vfs\-cache\-poll\-interval\ duration\ \ \ Interval\ to\ poll\ the\ cache\ for\ stale\ objects.\ (default\ 1m0s) -\ \ \ \ \ \ \-\-vfs\-read\-chunk\-size\ int\ \ \ \ \ \ \ \ \ \ \ \ Read\ the\ source\ objects\ in\ chunks.\ (default\ 128M) -\ \ \ \ \ \ \-\-vfs\-read\-chunk\-size\-limit\ int\ \ \ \ \ \ If\ greater\ than\ \-\-vfs\-read\-chunk\-size,\ double\ the\ chunk\ size\ after\ each\ chunk\ read,\ until\ the\ limit\ is\ reached.\ \[aq]off\[aq]\ is\ unlimited.\ (default\ off) +\ \ \ \ \ \ \-\-addr\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ IPaddress:Port\ or\ :Port\ to\ bind\ server\ to.\ (default\ "localhost:8080") +\ \ \ \ \ \ \-\-cert\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ SSL\ PEM\ key\ (concatenation\ of\ certificate\ and\ CA\ certificate) +\ \ \ \ \ \ \-\-client\-ca\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Client\ certificate\ authority\ to\ verify\ clients\ with +\ \ \ \ \ \ \-\-dir\-cache\-time\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ cache\ directory\ entries\ for.\ (default\ 5m0s) +\ \ \ \ \ \ \-\-dir\-perms\ FileMode\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Directory\ permissions\ (default\ 0777) +\ \ \ \ \ \ \-\-etag\-hash\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Which\ hash\ to\ use\ for\ the\ ETag,\ or\ auto\ or\ blank\ for\ off +\ \ \ \ \ \ \-\-file\-perms\ FileMode\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ File\ permissions\ (default\ 0666) +\ \ \ \ \ \ \-\-gid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ gid\ field\ set\ by\ the\ filesystem.\ (default\ 502) +\ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ help\ for\ webdav +\ \ \ \ \ \ 
\-\-htpasswd\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ htpasswd\ file\ \-\ if\ not\ provided\ no\ authentication\ is\ done +\ \ \ \ \ \ \-\-key\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ SSL\ PEM\ Private\ key +\ \ \ \ \ \ \-\-max\-header\-bytes\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Maximum\ size\ of\ request\ header\ (default\ 4096) +\ \ \ \ \ \ \-\-no\-checksum\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ compare\ checksums\ on\ up/download. +\ \ \ \ \ \ \-\-no\-modtime\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ read/write\ the\ modification\ time\ (can\ speed\ things\ up). +\ \ \ \ \ \ \-\-no\-seek\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ allow\ seeking\ in\ files. +\ \ \ \ \ \ \-\-pass\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Password\ for\ authentication. +\ \ \ \ \ \ \-\-poll\-interval\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ wait\ between\ polling\ for\ changes.\ Must\ be\ smaller\ than\ dir\-cache\-time.\ Only\ on\ supported\ remotes.\ Set\ to\ 0\ to\ disable.\ (default\ 1m0s) +\ \ \ \ \ \ \-\-read\-only\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Mount\ read\-only. 
+\ \ \ \ \ \ \-\-realm\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ realm\ for\ authentication\ (default\ "rclone") +\ \ \ \ \ \ \-\-server\-read\-timeout\ duration\ \ \ \ \ \ \ \ \ \ \ Timeout\ for\ server\ reading\ data\ (default\ 1h0m0s) +\ \ \ \ \ \ \-\-server\-write\-timeout\ duration\ \ \ \ \ \ \ \ \ \ Timeout\ for\ server\ writing\ data\ (default\ 1h0m0s) +\ \ \ \ \ \ \-\-uid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ uid\ field\ set\ by\ the\ filesystem.\ (default\ 502) +\ \ \ \ \ \ \-\-umask\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ permission\ bits\ set\ by\ the\ filesystem.\ (default\ 2) +\ \ \ \ \ \ \-\-user\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ User\ name\ for\ authentication. +\ \ \ \ \ \ \-\-vfs\-cache\-max\-age\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ Max\ age\ of\ objects\ in\ the\ cache.\ (default\ 1h0m0s) +\ \ \ \ \ \ \-\-vfs\-cache\-max\-size\ SizeSuffix\ \ \ \ \ \ \ \ \ \ Max\ total\ size\ of\ objects\ in\ the\ cache.\ (default\ off) +\ \ \ \ \ \ \-\-vfs\-cache\-mode\ CacheMode\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Cache\ mode\ off|minimal|writes|full\ (default\ off) +\ \ \ \ \ \ \-\-vfs\-cache\-poll\-interval\ duration\ \ \ \ \ \ \ Interval\ to\ poll\ the\ cache\ for\ stale\ objects.\ (default\ 1m0s) +\ \ \ \ \ \ \-\-vfs\-read\-chunk\-size\ SizeSuffix\ \ \ \ \ \ \ \ \ Read\ the\ source\ objects\ in\ chunks.\ (default\ 128M) +\ \ \ \ \ \ \-\-vfs\-read\-chunk\-size\-limit\ SizeSuffix\ \ \ If\ greater\ than\ \-\-vfs\-read\-chunk\-size,\ double\ the\ chunk\ size\ after\ each\ chunk\ read,\ until\ the\ limit\ is\ reached.\ \[aq]off\[aq]\ is\ unlimited.\ (default\ off) \f[] .fi .SS rclone settier @@ -4160,6 +4463,17 @@ rclone\ sync\ /path/to/files\ remote:current\-backup .PP Rclone has a number of options to control its behaviour. 
.PP +Options that take parameters can have the values passed in two ways, +\f[C]\-\-option=value\f[] or \f[C]\-\-option\ value\f[]. +However boolean (true/false) options behave slightly differently to the +other options in that \f[C]\-\-boolean\f[] sets the option to +\f[C]true\f[] and the absence of the flag sets it to \f[C]false\f[]. +It is also possible to specify \f[C]\-\-boolean=false\f[] or +\f[C]\-\-boolean=true\f[]. +Note that \f[C]\-\-boolean\ false\f[] is not valid \- this is parsed as +\f[C]\-\-boolean\f[] and the \f[C]false\f[] is parsed as an extra +command line argument for rclone. +.PP Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "\-1.5h" or @@ -4305,6 +4619,9 @@ See the mount (/commands/rclone_mount/#file-buffering) documentation for more details. .PP Set to 0 to disable the buffering for the minimum memory usage. +.PP +Note that the memory allocation of the buffers is influenced by the +\-\-use\-mmap (#use-mmap) flag. .SS \-\-checkers=N .PP The number of checkers to run in parallel. @@ -4343,9 +4660,8 @@ created with an older version). If \f[C]$XDG_CONFIG_HOME\f[] is set it will be at \f[C]$XDG_CONFIG_HOME/rclone/rclone.conf\f[] .PP -If you run \f[C]rclone\ \-h\f[] and look at the help for the -\f[C]\-\-config\f[] option you will see where the default location is -for you. +If you run \f[C]rclone\ config\ file\f[] you will see where the default +location is for you. .PP Use this flag to override the config location, eg \f[C]rclone\ \-\-config=".myconfig"\ .config\f[]. @@ -4775,7 +5091,8 @@ to the console. Note: Encrypted destinations are not supported by \f[C]\-\-track\-renames\f[]. .PP -Note that \f[C]\-\-track\-renames\f[] uses extra memory to keep track of +Note that \f[C]\-\-track\-renames\f[] is incompatible with +\f[C]\-\-no\-traverse\f[] and that it uses extra memory to keep track of all the rename candidates. 
.PP Note also that \f[C]\-\-track\-renames\f[] is incompatible with @@ -4879,6 +5196,20 @@ This can be useful when transferring to a remote which doesn\[aq]t support mod times directly as it is more accurate than a \f[C]\-\-size\-only\f[] check and faster than using \f[C]\-\-checksum\f[]. +.SS \-\-use\-mmap +.PP +If this flag is set then rclone will use anonymous memory allocated by +mmap on Unix based platforms and VirtualAlloc on Windows for its +transfer buffers (size controlled by \f[C]\-\-buffer\-size\f[]). +Memory allocated like this does not go on the Go heap and can be +returned to the OS immediately when it is finished with. +.PP +If this flag is not set then rclone will allocate and free the buffers +using the Go memory allocator which may use more memory as memory pages +are returned less aggressively to the OS. +.PP +It is possible this does not work well on all platforms so it is +disabled by default; in the future it may be enabled by default. .SS \-\-use\-server\-modtime .PP Some object\-store backends (e.g, Swift, S3) do not preserve file @@ -5083,6 +5414,26 @@ In this mode, TLS is susceptible to man\-in\-the\-middle attacks. This option defaults to \f[C]false\f[]. .PP \f[B]This should be used only for testing.\f[] +.SS \-\-no\-traverse +.PP +The \f[C]\-\-no\-traverse\f[] flag controls whether the destination file +system is traversed when using the \f[C]copy\f[] or \f[C]move\f[] +commands. +\f[C]\-\-no\-traverse\f[] is not compatible with \f[C]sync\f[] and will +be ignored if you supply it with \f[C]sync\f[]. +.PP +If you are only copying a small number of files (or are filtering most +of the files) and/or have a large number of files on the destination +then \f[C]\-\-no\-traverse\f[] will stop rclone listing the destination +and save time. 
+.PP +However, if you are copying a large number of files, especially if you +are doing a copy where lots of the files under consideration haven\[aq]t +changed and won\[aq]t need copying then you shouldn\[aq]t use +\f[C]\-\-no\-traverse\f[]. +.PP +See rclone copy (https://rclone.org/commands/rclone_copy/) for an +example of how to use it. .SS Filtering .PP For the filtering options @@ -5348,21 +5699,20 @@ rclone\ config .PP to set up the config file. .PP -Find the config file by running \f[C]rclone\ \-h\f[] and looking for the -help for the \f[C]\-\-config\f[] option +Find the config file by running \f[C]rclone\ config\ file\f[], for +example .IP .nf \f[C] -$\ rclone\ \-h -[snip] -\ \ \ \ \ \ \-\-config="/home/user/.rclone.conf":\ Config\ file. -[snip] +$\ rclone\ config\ file +Configuration\ file\ is\ stored\ at: +/home/user/.rclone.conf \f[] .fi .PP Now transfer it to the remote box (scp, cut paste, ftp, sftp etc) and -place it in the correct place (use \f[C]rclone\ \-h\f[] on the remote -box to find out where). +place it in the correct place (use \f[C]rclone\ config\ file\f[] on the +remote box to find out where). .SH Filtering, includes and excludes .PP Rclone has a sophisticated set of include and exclude rules. @@ -6226,7 +6576,7 @@ The slice indices are similar to Python slices: start[:end] start is the 0 based chunk number from the beginning of the file to fetch inclusive. end is 0 based chunk number from the beginning of the file to fetch -exclisive. +exclusive. Both values can be negative, in which case they count from the back of the file. The value "\-5:" represents the last 5 chunks of a file. @@ -6491,8 +6841,6 @@ dstFs \- a remote name string eg "drive2:" for the destination dstRemote \- a path within that remote eg "file2.txt" for the destination .PP -This returns \- jobid \- ID of async job to query with job/status -.PP Authentication is required for this call. 
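+.PP
+For example (an illustrative call \- the remote names and paths are
+made up), this can be invoked via the rc like this
+.IP
+.nf
+\f[C]
+rclone\ rc\ operations/copyfile\ srcFs=drive:\ srcRemote=file.txt\ dstFs=drive2:\ dstRemote=file2.txt
+\f[]
+.fi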
.SS operations/copyurl: Copy the URL to the object
.PP
@@ -6590,8 +6938,6 @@ dstFs \- a remote name string eg "drive2:" for the
destination
dstRemote \- a path within that remote eg "file2.txt" for the
destination
.PP
-This returns \- jobid \- ID of async job to query with job/status
-.PP
Authentication is required for this call.
.SS operations/purge: Remove a directory or container and all of its
contents
@@ -6671,6 +7017,32 @@ Repeated as often as required.
Only supply the options you wish to change.
If an option is unknown it will be silently ignored.
Not all options will have an effect when changed like this.
+.PP
+For example:
+.PP
+This sets DEBUG level logs (\-vv)
+.IP
+.nf
+\f[C]
+rclone\ rc\ options/set\ \-\-json\ \[aq]{"main":\ {"LogLevel":\ 8}}\[aq]
+\f[]
+.fi
+.PP
+And this sets INFO level logs (\-v)
+.IP
+.nf
+\f[C]
+rclone\ rc\ options/set\ \-\-json\ \[aq]{"main":\ {"LogLevel":\ 7}}\[aq]
+\f[]
+.fi
+.PP
+And this sets NOTICE level logs (normal without \-v)
+.IP
+.nf
+\f[C]
+rclone\ rc\ options/set\ \-\-json\ \[aq]{"main":\ {"LogLevel":\ 6}}\[aq]
+\f[]
+.fi
.SS rc/error: This returns an error
.PP
This returns an error with the input as part of its error string.
@@ -6701,8 +7073,6 @@ srcFs \- a remote name string eg "drive:src" for the source
.IP \[bu] 2
dstFs \- a remote name string eg "drive:dst" for the destination
.PP
-This returns \- jobid \- ID of async job to query with job/status
-.PP
See the copy command (https://rclone.org/commands/rclone_copy/) for more
information on the above.
.PP
@@ -6717,8 +7087,6 @@ dstFs \- a remote name string eg "drive:dst" for the destination
.IP \[bu] 2
deleteEmptySrcDirs \- delete empty src directories if set
.PP
-This returns \- jobid \- ID of async job to query with job/status
-.PP
See the move command (https://rclone.org/commands/rclone_move/) for more
information on the above.
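The numeric LogLevel values shown in the options/set examples above (8 for DEBUG, 7 for INFO, 6 for NOTICE) can be packed into the JSON body programmatically. A small illustrative sketch; the level numbers are taken from the examples above, while the helper itself is hypothetical:

```python
import json

# Log level numbers as shown in the options/set examples:
# DEBUG (-vv) = 8, INFO (-v) = 7, NOTICE (default) = 6
LOG_LEVELS = {"DEBUG": 8, "INFO": 7, "NOTICE": 6}

def options_set_payload(level_name):
    """Build the JSON body passed to `rclone rc options/set --json ...`."""
    return json.dumps({"main": {"LogLevel": LOG_LEVELS[level_name]}})

print(options_set_payload("DEBUG"))  # {"main": {"LogLevel": 8}}
```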
.PP
@@ -6731,8 +7099,6 @@ srcFs \- a remote name string eg "drive:src" for the source
.IP \[bu] 2
dstFs \- a remote name string eg "drive:dst" for the destination
.PP
-This returns \- jobid \- ID of async job to query with job/status
-.PP
See the sync command (https://rclone.org/commands/rclone_sync/) for more
information on the above.
.PP
@@ -7336,9 +7702,9 @@ T}
T{
WebDAV
T}@T{
-\-
+MD5, SHA1 ††
T}@T{
-Yes ††
+Yes †††
T}@T{
Depends
T}@T{
@@ -7391,7 +7757,9 @@ This is an SHA256 sum of all the 4MB block SHA256s.
\f[C]md5sum\f[] or \f[C]sha1sum\f[] as well as \f[C]echo\f[] are in the
remote\[aq]s PATH.
.PP
-†† WebDAV supports modtimes when used with Owncloud and Nextcloud only.
+†† WebDAV supports hashes when used with Owncloud and Nextcloud only.
+.PP
+††† WebDAV supports modtimes when used with Owncloud and Nextcloud only.
.PP
‡‡ Microsoft OneDrive Personal supports SHA1 hashes, whereas OneDrive
for business and SharePoint server support Microsoft\[aq]s own
@@ -7906,7 +8274,7 @@ Yes ‡
T}@T{
No #2178 (https://github.com/ncw/rclone/issues/2178)
T}@T{
-No
+Yes
T}
T{
Yandex Disk
@@ -8019,6 +8387,9 @@ account on the particular cloud provider.
This is used to fetch quota information from the remote, like bytes
used/free/quota and bytes used in the trash.
.PP
+This is also used to return the space used and available for
+\f[C]rclone\ mount\f[].
+.PP
If the server can\[aq]t do \f[C]About\f[] then \f[C]rclone\ about\f[]
will return an error.
.SS Alias
@@ -8520,6 +8891,8 @@ The S3 backend can be used with a number of different providers:
.IP \[bu] 2
AWS S3
.IP \[bu] 2
+Alibaba Cloud (Aliyun) Object Storage System (OSS)
+.IP \[bu] 2
Ceph
.IP \[bu] 2
DigitalOcean Spaces
@@ -8754,6 +9127,8 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ \ \ \\\ "STANDARD_IA"
\ 5\ /\ One\ Zone\ Infrequent\ Access\ storage\ class
\ \ \ \\\ "ONEZONE_IA"
+\ 6\ /\ Glacier\ storage\ class
+\ \ \ \\\ "GLACIER"
storage_class>\ 1
Remote\ config
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
@@ -8805,8 +9180,35 @@ to 1 ns.
.PP
rclone supports multipart uploads with S3 which means that it can upload
files bigger than 5GB.
+.PP
Note that files uploaded \f[I]both\f[] with multipart upload
\f[I]and\f[] through crypt remotes do not have MD5 sums.
+.PP
+Rclone switches from single part uploads to multipart uploads at the
+point specified by \f[C]\-\-s3\-upload\-cutoff\f[].
+This can be a maximum of 5GB and a minimum of 0 (ie always upload
+multipart files).
+.PP
+The chunk sizes used in the multipart upload are specified by
+\f[C]\-\-s3\-chunk\-size\f[] and the number of chunks uploaded
+concurrently is specified by \f[C]\-\-s3\-upload\-concurrency\f[].
+.PP
+Multipart uploads will use \f[C]\-\-transfers\f[] *
+\f[C]\-\-s3\-upload\-concurrency\f[] * \f[C]\-\-s3\-chunk\-size\f[]
+extra memory.
+Single part uploads do not use extra memory.
+.PP
+Single part transfers can be faster than multipart transfers or slower
+depending on your latency from S3 \- the more latency, the more likely
+single part transfers will be faster.
+.PP
+Increasing \f[C]\-\-s3\-upload\-concurrency\f[] will increase throughput
+(8 would be a sensible value) and increasing
+\f[C]\-\-s3\-chunk\-size\f[] also increases throughput (16M would be
+sensible).
+Increasing either of these will use more memory.
+The default values are high enough to gain most of the possible
+performance without using too much memory.
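The extra-memory formula above (`--transfers` * `--s3-upload-concurrency` * `--s3-chunk-size`) is easy to evaluate for a given configuration. A minimal sketch, assuming 4 transfers, 4 concurrent chunks and 5 MiB chunks as example values (check your rclone version's actual defaults):

```python
def s3_multipart_extra_memory(transfers, upload_concurrency, chunk_size_mib):
    """Worst-case extra buffer memory (in MiB) for S3 multipart uploads:
    --transfers * --s3-upload-concurrency * --s3-chunk-size."""
    return transfers * upload_concurrency * chunk_size_mib

# e.g. 4 transfers * 4 concurrent chunks * 5 MiB chunks
print(s3_multipart_extra_memory(4, 4, 5))  # 80 (MiB)
```

Raising `--s3-upload-concurrency` to 8 and `--s3-chunk-size` to 16M as the text suggests would use 4 * 8 * 16 = 512 MiB of buffer memory with 4 transfers.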
.SS Buckets and Regions .PP With Amazon S3 you can list buckets (\f[C]rclone\ lsd\f[]) using any @@ -8934,10 +9336,12 @@ A proper fix is being worked on in issue #1824 (https://github.com/ncw/rclone/issues/1824). .SS Glacier .PP -You can transition objects to glacier storage using a lifecycle +You can upload objects using the glacier storage class or transition +them to glacier using a lifecycle policy (http://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html). The bucket can still be synced or copied into normally, but if rclone -tries to access the data you will see an error like below. +tries to access data from the glacier storage class you will see an +error like below. .IP .nf \f[C] @@ -8951,7 +9355,8 @@ the object(s) in question before using rclone. .SS Standard Options .PP Here are the standard options specific to s3 (Amazon S3 Compliant -Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)). +Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, +Minio, etc)). .SS \-\-s3\-provider .PP Choose your S3 provider. @@ -8973,6 +9378,12 @@ Examples: Amazon Web Services (AWS) S3 .RE .IP \[bu] 2 +"Alibaba" +.RS 2 +.IP \[bu] 2 +Alibaba Cloud Object Storage System (OSS) formerly Aliyun +.RE +.IP \[bu] 2 "Ceph" .RS 2 .IP \[bu] 2 @@ -9003,6 +9414,12 @@ IBM COS S3 Minio Object Storage .RE .IP \[bu] 2 +"Netease" +.RS 2 +.IP \[bu] 2 +Netease Object Storage (NOS) +.RE +.IP \[bu] 2 "Wasabi" .RS 2 .IP \[bu] 2 @@ -9141,6 +9558,14 @@ EU (London) Region Needs location constraint eu\-west\-2. .RE .IP \[bu] 2 +"eu\-north\-1" +.RS 2 +.IP \[bu] 2 +EU (Stockholm) Region +.IP \[bu] 2 +Needs location constraint eu\-north\-1. 
+.RE +.IP \[bu] 2 "eu\-central\-1" .RS 2 .IP \[bu] 2 @@ -9378,13 +9803,13 @@ EU Cross Region Amsterdam Private Endpoint "s3.eu\-gb.objectstorage.softlayer.net" .RS 2 .IP \[bu] 2 -Great Britan Endpoint +Great Britain Endpoint .RE .IP \[bu] 2 "s3.eu\-gb.objectstorage.service.networklayer.com" .RS 2 .IP \[bu] 2 -Great Britan Private Endpoint +Great Britain Private Endpoint .RE .IP \[bu] 2 "s3.ap\-geo.objectstorage.softlayer.net" @@ -9461,6 +9886,135 @@ Toronto Single Site Private Endpoint .RE .SS \-\-s3\-endpoint .PP +Endpoint for OSS API. +.IP \[bu] 2 +Config: endpoint +.IP \[bu] 2 +Env Var: RCLONE_S3_ENDPOINT +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Default: "" +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +"oss\-cn\-hangzhou.aliyuncs.com" +.RS 2 +.IP \[bu] 2 +East China 1 (Hangzhou) +.RE +.IP \[bu] 2 +"oss\-cn\-shanghai.aliyuncs.com" +.RS 2 +.IP \[bu] 2 +East China 2 (Shanghai) +.RE +.IP \[bu] 2 +"oss\-cn\-qingdao.aliyuncs.com" +.RS 2 +.IP \[bu] 2 +North China 1 (Qingdao) +.RE +.IP \[bu] 2 +"oss\-cn\-beijing.aliyuncs.com" +.RS 2 +.IP \[bu] 2 +North China 2 (Beijing) +.RE +.IP \[bu] 2 +"oss\-cn\-zhangjiakou.aliyuncs.com" +.RS 2 +.IP \[bu] 2 +North China 3 (Zhangjiakou) +.RE +.IP \[bu] 2 +"oss\-cn\-huhehaote.aliyuncs.com" +.RS 2 +.IP \[bu] 2 +North China 5 (Huhehaote) +.RE +.IP \[bu] 2 +"oss\-cn\-shenzhen.aliyuncs.com" +.RS 2 +.IP \[bu] 2 +South China 1 (Shenzhen) +.RE +.IP \[bu] 2 +"oss\-cn\-hongkong.aliyuncs.com" +.RS 2 +.IP \[bu] 2 +Hong Kong (Hong Kong) +.RE +.IP \[bu] 2 +"oss\-us\-west\-1.aliyuncs.com" +.RS 2 +.IP \[bu] 2 +US West 1 (Silicon Valley) +.RE +.IP \[bu] 2 +"oss\-us\-east\-1.aliyuncs.com" +.RS 2 +.IP \[bu] 2 +US East 1 (Virginia) +.RE +.IP \[bu] 2 +"oss\-ap\-southeast\-1.aliyuncs.com" +.RS 2 +.IP \[bu] 2 +Southeast Asia Southeast 1 (Singapore) +.RE +.IP \[bu] 2 +"oss\-ap\-southeast\-2.aliyuncs.com" +.RS 2 +.IP \[bu] 2 +Asia Pacific Southeast 2 (Sydney) +.RE +.IP \[bu] 2 +"oss\-ap\-southeast\-3.aliyuncs.com" +.RS 2 +.IP \[bu] 2 +Southeast Asia Southeast 
3 (Kuala Lumpur) +.RE +.IP \[bu] 2 +"oss\-ap\-southeast\-5.aliyuncs.com" +.RS 2 +.IP \[bu] 2 +Asia Pacific Southeast 5 (Jakarta) +.RE +.IP \[bu] 2 +"oss\-ap\-northeast\-1.aliyuncs.com" +.RS 2 +.IP \[bu] 2 +Asia Pacific Northeast 1 (Japan) +.RE +.IP \[bu] 2 +"oss\-ap\-south\-1.aliyuncs.com" +.RS 2 +.IP \[bu] 2 +Asia Pacific South 1 (Mumbai) +.RE +.IP \[bu] 2 +"oss\-eu\-central\-1.aliyuncs.com" +.RS 2 +.IP \[bu] 2 +Central Europe 1 (Frankfurt) +.RE +.IP \[bu] 2 +"oss\-eu\-west\-1.aliyuncs.com" +.RS 2 +.IP \[bu] 2 +West Europe (London) +.RE +.IP \[bu] 2 +"oss\-me\-east\-1.aliyuncs.com" +.RS 2 +.IP \[bu] 2 +Middle East 1 (Dubai) +.RE +.RE +.SS \-\-s3\-endpoint +.PP Endpoint for S3 API. Required when using an S3 clone. .IP \[bu] 2 @@ -9569,6 +10123,12 @@ EU (Ireland) Region. EU (London) Region. .RE .IP \[bu] 2 +"eu\-north\-1" +.RS 2 +.IP \[bu] 2 +EU (Stockholm) Region. +.RE +.IP \[bu] 2 "EU" .RS 2 .IP \[bu] 2 @@ -9678,7 +10238,7 @@ US East Region Flex "us\-south\-standard" .RS 2 .IP \[bu] 2 -US Sout hRegion Standard +US South Region Standard .RE .IP \[bu] 2 "us\-south\-vault" @@ -9726,25 +10286,25 @@ EU Cross Region Flex "eu\-gb\-standard" .RS 2 .IP \[bu] 2 -Great Britan Standard +Great Britain Standard .RE .IP \[bu] 2 "eu\-gb\-vault" .RS 2 .IP \[bu] 2 -Great Britan Vault +Great Britain Vault .RE .IP \[bu] 2 "eu\-gb\-cold" .RS 2 .IP \[bu] 2 -Great Britan Cold +Great Britain Cold .RE .IP \[bu] 2 "eu\-gb\-flex" .RS 2 .IP \[bu] 2 -Great Britan Flex +Great Britain Flex .RE .IP \[bu] 2 "ap\-standard" @@ -9836,6 +10396,9 @@ Default: "" .PP Canned ACL used when creating buckets and storing or copying objects. .PP +This ACL is used for creating objects and if bucket_acl isn\[aq]t set, +for creating buckets too. 
+.PP
For more info visit
https://docs.aws.amazon.com/AmazonS3/latest/dev/acl\-overview.html#canned\-acl
.PP
@@ -10043,18 +10606,128 @@ Standard Infrequent Access storage class
.IP \[bu] 2
One Zone Infrequent Access storage class
.RE
+.IP \[bu] 2
+"GLACIER"
+.RS 2
+.IP \[bu] 2
+Glacier storage class
+.RE
+.RE
+.SS \-\-s3\-storage\-class
+.PP
+The storage class to use when storing new objects in OSS.
+.IP \[bu] 2
+Config: storage_class
+.IP \[bu] 2
+Env Var: RCLONE_S3_STORAGE_CLASS
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+""
+.RS 2
+.IP \[bu] 2
+Default
+.RE
+.IP \[bu] 2
+"STANDARD"
+.RS 2
+.IP \[bu] 2
+Standard storage class
+.RE
+.IP \[bu] 2
+"GLACIER"
+.RS 2
+.IP \[bu] 2
+Archive storage mode.
+.RE
+.IP \[bu] 2
+"STANDARD_IA"
+.RS 2
+.IP \[bu] 2
+Infrequent access storage mode.
+.RE
.RE
.SS Advanced Options
.PP
Here are the advanced options specific to s3 (Amazon S3 Compliant
-Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)).
+Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS,
+Minio, etc)).
+.SS \-\-s3\-bucket\-acl
+.PP
+Canned ACL used when creating buckets.
+.PP
+For more info visit
+https://docs.aws.amazon.com/AmazonS3/latest/dev/acl\-overview.html#canned\-acl
+.PP
+Note that this ACL is applied only when creating buckets.
+If it isn\[aq]t set then "acl" is used instead.
+.IP \[bu] 2
+Config: bucket_acl
+.IP \[bu] 2
+Env Var: RCLONE_S3_BUCKET_ACL
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+"private"
+.RS 2
+.IP \[bu] 2
+Owner gets FULL_CONTROL.
+No one else has access rights (default).
+.RE
+.IP \[bu] 2
+"public\-read"
+.RS 2
+.IP \[bu] 2
+Owner gets FULL_CONTROL.
+The AllUsers group gets READ access.
+.RE
+.IP \[bu] 2
+"public\-read\-write"
+.RS 2
+.IP \[bu] 2
+Owner gets FULL_CONTROL.
+The AllUsers group gets READ and WRITE access.
+.IP \[bu] 2
+Granting this on a bucket is generally not recommended.
+.RE +.IP \[bu] 2 +"authenticated\-read" +.RS 2 +.IP \[bu] 2 +Owner gets FULL_CONTROL. +The AuthenticatedUsers group gets READ access. +.RE +.RE +.SS \-\-s3\-upload\-cutoff +.PP +Cutoff for switching to chunked upload +.PP +Any files larger than this will be uploaded in chunks of chunk_size. +The minimum is 0 and the maximum is 5GB. +.IP \[bu] 2 +Config: upload_cutoff +.IP \[bu] 2 +Env Var: RCLONE_S3_UPLOAD_CUTOFF +.IP \[bu] 2 +Type: SizeSuffix +.IP \[bu] 2 +Default: 200M .SS \-\-s3\-chunk\-size .PP Chunk size to use for uploading. .PP -Any files larger than this will be uploaded in chunks of this size. -The default is 5MB. -The minimum is 5MB. +When uploading files larger than upload_cutoff they will be uploaded as +multipart uploads using this chunk size. .PP Note that "\-\-s3\-upload\-concurrency" chunks of this size are buffered in memory per transfer. @@ -10108,7 +10781,7 @@ Env Var: RCLONE_S3_UPLOAD_CONCURRENCY .IP \[bu] 2 Type: int .IP \[bu] 2 -Default: 2 +Default: 4 .SS \-\-s3\-force\-path\-style .PP If true use path style access if false use virtual hosted style. @@ -10631,6 +11304,33 @@ So once set up, for example to copy files into a bucket rclone\ copy\ /path/to/files\ minio:bucket \f[] .fi +.SS Scaleway +.PP +Scaleway (https://www.scaleway.com/object-storage/) The Object Storage +platform allows you to store anything from backups, logs and web assets +to documents and photos. +Files can be dropped from the Scaleway console or transferred through +our API and CLI or using any S3\-compatible tool. 
+.PP +Scaleway provides an S3 interface which can be configured for use with +rclone like this: +.IP +.nf +\f[C] +[scaleway] +type\ =\ s3 +env_auth\ =\ false +endpoint\ =\ s3.nl\-ams.scw.cloud +access_key_id\ =\ SCWXXXXXXXXXXXXXX +secret_access_key\ =\ 1111111\-2222\-3333\-44444\-55555555555555 +region\ =\ nl\-ams +location_constraint\ = +acl\ =\ private +force_path_style\ =\ false +server_side_encryption\ = +storage_class\ = +\f[] +.fi .SS Wasabi .PP Wasabi (https://wasabi.com) is a cloud\-based object storage service for @@ -10749,31 +11449,47 @@ server_side_encryption\ = storage_class\ = \f[] .fi -.SS Aliyun OSS / Netease NOS +.SS Alibaba OSS .PP -This describes how to set up Aliyun OSS \- Netease NOS is the same -except for different endpoints. -.PP -Note this is a pretty standard S3 setup, except for the setting of -\f[C]force_path_style\ =\ false\f[] in the advanced config. +Here is an example of making an Alibaba Cloud (Aliyun) +OSS (https://www.alibabacloud.com/product/oss/) configuration. +First run: .IP .nf \f[C] -#\ rclone\ config -e/n/d/r/c/s/q>\ n +rclone\ config +\f[] +.fi +.PP +This will guide you through an interactive setup process. +.IP +.nf +\f[C] +No\ remotes\ found\ \-\ make\ a\ new\ one +n)\ New\ remote +s)\ Set\ configuration\ password +q)\ Quit\ config +n/s/q>\ n name>\ oss Type\ of\ storage\ to\ configure. Enter\ a\ string\ value.\ Press\ Enter\ for\ the\ default\ (""). Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 3\ /\ Amazon\ S3\ Compliant\ Storage\ Providers\ (AWS,\ Ceph,\ Dreamhost,\ IBM\ COS,\ Minio) +[snip] +\ 4\ /\ Amazon\ S3\ Compliant\ Storage\ Provider\ (AWS,\ Alibaba,\ Ceph,\ Digital\ Ocean,\ Dreamhost,\ IBM\ COS,\ Minio,\ etc) \ \ \ \\\ "s3" +[snip] Storage>\ s3 Choose\ your\ S3\ provider. Enter\ a\ string\ value.\ Press\ Enter\ for\ the\ default\ (""). 
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 8\ /\ Any\ other\ S3\ compatible\ provider -\ \ \ \\\ "Other" -provider>\ other +\ 1\ /\ Amazon\ Web\ Services\ (AWS)\ S3 +\ \ \ \\\ "AWS" +\ 2\ /\ Alibaba\ Cloud\ Object\ Storage\ System\ (OSS)\ formerly\ Aliyun +\ \ \ \\\ "Alibaba" +\ 3\ /\ Ceph\ Object\ Storage +\ \ \ \\\ "Ceph" +[snip] +provider>\ Alibaba Get\ AWS\ credentials\ from\ runtime\ (environment\ variables\ or\ EC2/ECS\ meta\ data\ if\ no\ env\ vars). Only\ applies\ if\ access_key_id\ and\ secret_access_key\ is\ blank. Enter\ a\ boolean\ value\ (true\ or\ false).\ Press\ Enter\ for\ the\ default\ ("false"). @@ -10786,67 +11502,62 @@ env_auth>\ 1 AWS\ Access\ Key\ ID. Leave\ blank\ for\ anonymous\ access\ or\ runtime\ credentials. Enter\ a\ string\ value.\ Press\ Enter\ for\ the\ default\ (""). -access_key_id>\ xxxxxxxxxxxx +access_key_id>\ accesskeyid AWS\ Secret\ Access\ Key\ (password) Leave\ blank\ for\ anonymous\ access\ or\ runtime\ credentials. Enter\ a\ string\ value.\ Press\ Enter\ for\ the\ default\ (""). -secret_access_key>\ xxxxxxxxxxxxxxxxx -Region\ to\ connect\ to. -Leave\ blank\ if\ you\ are\ using\ an\ S3\ clone\ and\ you\ don\[aq]t\ have\ a\ region. +secret_access_key>\ secretaccesskey +Endpoint\ for\ OSS\ API. Enter\ a\ string\ value.\ Press\ Enter\ for\ the\ default\ (""). Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Use\ this\ if\ unsure.\ Will\ use\ v4\ signatures\ and\ an\ empty\ region. -\ \ \ \\\ "" -\ 2\ /\ Use\ this\ only\ if\ v4\ signatures\ don\[aq]t\ work,\ eg\ pre\ Jewel/v10\ CEPH. -\ \ \ \\\ "other\-v2\-signature" -region>\ 1 -Endpoint\ for\ S3\ API. -Required\ when\ using\ an\ S3\ clone. -Enter\ a\ string\ value.\ Press\ Enter\ for\ the\ default\ (""). -Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -endpoint>\ oss\-cn\-shenzhen.aliyuncs.com -Location\ constraint\ \-\ must\ be\ set\ to\ match\ the\ Region. 
-Leave\ blank\ if\ not\ sure.\ Used\ when\ creating\ buckets\ only. -Enter\ a\ string\ value.\ Press\ Enter\ for\ the\ default\ (""). -location_constraint> -Canned\ ACL\ used\ when\ creating\ buckets\ and/or\ storing\ objects\ in\ S3. -For\ more\ info\ visit\ https://docs.aws.amazon.com/AmazonS3/latest/dev/acl\-overview.html#canned\-acl +\ 1\ /\ East\ China\ 1\ (Hangzhou) +\ \ \ \\\ "oss\-cn\-hangzhou.aliyuncs.com" +\ 2\ /\ East\ China\ 2\ (Shanghai) +\ \ \ \\\ "oss\-cn\-shanghai.aliyuncs.com" +\ 3\ /\ North\ China\ 1\ (Qingdao) +\ \ \ \\\ "oss\-cn\-qingdao.aliyuncs.com" +[snip] +endpoint>\ 1 +Canned\ ACL\ used\ when\ creating\ buckets\ and\ storing\ or\ copying\ objects. + +Note\ that\ this\ ACL\ is\ applied\ when\ server\ side\ copying\ objects\ as\ S3 +doesn\[aq]t\ copy\ the\ ACL\ from\ the\ source\ but\ rather\ writes\ a\ fresh\ one. Enter\ a\ string\ value.\ Press\ Enter\ for\ the\ default\ (""). Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value \ 1\ /\ Owner\ gets\ FULL_CONTROL.\ No\ one\ else\ has\ access\ rights\ (default). \ \ \ \\\ "private" +\ 2\ /\ Owner\ gets\ FULL_CONTROL.\ The\ AllUsers\ group\ gets\ READ\ access. +\ \ \ \\\ "public\-read" +\ \ \ /\ Owner\ gets\ FULL_CONTROL.\ The\ AllUsers\ group\ gets\ READ\ and\ WRITE\ access. +[snip] acl>\ 1 +The\ storage\ class\ to\ use\ when\ storing\ new\ objects\ in\ OSS. +Enter\ a\ string\ value.\ Press\ Enter\ for\ the\ default\ (""). +Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value +\ 1\ /\ Default +\ \ \ \\\ "" +\ 2\ /\ Standard\ storage\ class +\ \ \ \\\ "STANDARD" +\ 3\ /\ Archive\ storage\ mode. +\ \ \ \\\ "GLACIER" +\ 4\ /\ Infrequent\ access\ storage\ mode. +\ \ \ \\\ "STANDARD_IA" +storage_class>\ 1 Edit\ advanced\ config?\ (y/n) y)\ Yes n)\ No -y/n>\ y -Chunk\ size\ to\ use\ for\ uploading -Enter\ a\ size\ with\ suffix\ k,M,G,T.\ Press\ Enter\ for\ the\ default\ ("5M"). 
-chunk_size> -Don\[aq]t\ store\ MD5\ checksum\ with\ object\ metadata -Enter\ a\ boolean\ value\ (true\ or\ false).\ Press\ Enter\ for\ the\ default\ ("false"). -disable_checksum> -An\ AWS\ session\ token -Enter\ a\ string\ value.\ Press\ Enter\ for\ the\ default\ (""). -session_token> -Concurrency\ for\ multipart\ uploads. -Enter\ a\ signed\ integer.\ Press\ Enter\ for\ the\ default\ ("2"). -upload_concurrency> -If\ true\ use\ path\ style\ access\ if\ false\ use\ virtual\ hosted\ style. -Some\ providers\ (eg\ Aliyun\ OSS\ or\ Netease\ COS)\ require\ this. -Enter\ a\ boolean\ value\ (true\ or\ false).\ Press\ Enter\ for\ the\ default\ ("true"). -force_path_style>\ false +y/n>\ n Remote\ config \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- [oss] type\ =\ s3 -provider\ =\ Other +provider\ =\ Alibaba env_auth\ =\ false -access_key_id\ =\ xxxxxxxxx -secret_access_key\ =\ xxxxxxxxxxxxx -endpoint\ =\ oss\-cn\-shenzhen.aliyuncs.com +access_key_id\ =\ accesskeyid +secret_access_key\ =\ secretaccesskey +endpoint\ =\ oss\-cn\-hangzhou.aliyuncs.com acl\ =\ private -force_path_style\ =\ false +storage_class\ =\ Standard \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- y)\ Yes\ this\ is\ OK e)\ Edit\ this\ remote @@ -10854,6 +11565,12 @@ d)\ Delete\ this\ remote y/e/d>\ y \f[] .fi +.SS Netease NOS +.PP +For Netease NOS configure as per the configurator +\f[C]rclone\ config\f[] setting the provider \f[C]Netease\f[]. +This will automatically set \f[C]force_path_style\ =\ false\f[] which is +necessary for it to run properly. .SS Backblaze B2 .PP B2 is Backblaze\[aq]s cloud storage @@ -10873,8 +11590,11 @@ rclone\ config .fi .PP This will guide you through an interactive setup process. -You will need your account number (a short hex number) and key (a long -hex number) which you can get from the b2 control panel. +To authenticate you will either need your Account ID (a short hex +number) and Master Application Key (a long hex number) OR an Application +Key, which is the recommended method. 
+See below for further details on generating and using an Application +Key. .IP .nf \f[C] @@ -10971,15 +11691,16 @@ rclone\ sync\ /home/local/directory\ remote:bucket B2 supports multiple Application Keys for different access permission to B2 Buckets (https://www.backblaze.com/b2/docs/application_keys.html). .PP -You can use these with rclone too. +You can use these with rclone too; you will need to use rclone version +1.43 or later. .PP Follow Backblaze\[aq]s docs to create an Application Key with the -required permission and add the \f[C]Application\ Key\ ID\f[] as the +required permission and add the \f[C]applicationKeyId\f[] as the \f[C]account\f[] and the \f[C]Application\ Key\f[] itself as the \f[C]key\f[]. .PP -Note that you must put the Application Key ID as the \f[C]account\f[] \- -you can\[aq]t use the master Account ID. +Note that you must put the \f[I]applicationKeyId\f[] as the +\f[C]account\f[] \[en] you can\[aq]t use the master Account ID. If you try then B2 will return 401 errors. .SS \-\-fast\-list .PP @@ -11055,8 +11776,8 @@ the old versions of files, leaving the current ones intact. You can also supply a path and only old versions under that path will be deleted, eg \f[C]rclone\ cleanup\ remote:bucket/path/to/stuff\f[]. .PP -Note that \f[C]cleanup\f[] does not remove partially uploaded files from -the bucket. +Note that \f[C]cleanup\f[] will remove partially uploaded files from the +bucket if they are more than a day old. .PP When you \f[C]purge\f[] a bucket, the current and the old versions will be deleted then the bucket will be deleted. @@ -11299,7 +12020,7 @@ Must fit in memory. When uploading large files, chunk the file into this size. Note that these chunks are buffered in memory and there might a maximum of "\-\-transfers" chunks in progress at once. -5,000,000 Bytes is the minimim size. +5,000,000 Bytes is the minimum size. 
+5,000,000 Bytes is the minimum size.
.IP \[bu] 2
Config: chunk_size
.IP \[bu] 2
@@ -11308,6 +12029,17 @@ Env Var: RCLONE_B2_CHUNK_SIZE
Type: SizeSuffix
.IP \[bu] 2
Default: 96M
+.SS \-\-b2\-disable\-checksum
+.PP
+Disable checksums for large (> upload cutoff) files
+.IP \[bu] 2
+Config: disable_checksum
+.IP \[bu] 2
+Env Var: RCLONE_B2_DISABLE_CHECKSUM
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS Box
.PP
Paths are specified as \f[C]remote:path\f[]
@@ -11436,6 +12168,16 @@ To copy a local directory to an Box directory called backup
rclone\ copy\ /home/source\ remote:backup
\f[]
.fi
+.SS Using rclone with an Enterprise account with SSO
+.PP
+If you have an "Enterprise" account type with Box with single sign on
+(SSO), you need to create a password to use Box with rclone.
+This can be done at your Enterprise Box account by going to Settings,
+"Account" Tab, and then set the password in the "Authentication" field.
+.PP
+Once you have done this, you can set up your Enterprise Box account
+using the same procedure detailed above, using the password you have
+just set.
.SS Invalid refresh token
.PP
According to the box
@@ -13343,6 +14085,9 @@ Note that \f[C]\-\-timeout\f[] isn\[aq]t supported (but
Note that \f[C]\-\-bind\f[] isn\[aq]t supported.
.PP
FTP could support server side move but doesn\[aq]t yet.
+.PP
+Note that the ftp backend does not support the \f[C]ftp_proxy\f[]
+environment variable yet.
.SS Google Cloud Storage
.PP
Paths are specified as \f[C]remote:bucket\f[] (or \f[C]remote:\f[] for
@@ -13800,12 +14545,24 @@ Multi\-regional location for United States.
Taiwan.
.RE
.IP \[bu] 2
+"asia\-east2"
+.RS 2
+.IP \[bu] 2
+Hong Kong.
+.RE
+.IP \[bu] 2
"asia\-northeast1"
.RS 2
.IP \[bu] 2
Tokyo.
.RE
.IP \[bu] 2
+"asia\-south1"
+.RS 2
+.IP \[bu] 2
+Mumbai.
+.RE
+.IP \[bu] 2
"asia\-southeast1"
.RS 2
.IP \[bu] 2
@@ -13818,6 +14575,12 @@ Singapore.
Sydney.
.RE
.IP \[bu] 2
+"europe\-north1"
+.RS 2
+.IP \[bu] 2
+Finland.
+.RE +.IP \[bu] 2 "europe\-west1" .RS 2 .IP \[bu] 2 @@ -13830,6 +14593,18 @@ Belgium. London. .RE .IP \[bu] 2 +"europe\-west3" +.RS 2 +.IP \[bu] 2 +Frankfurt. +.RE +.IP \[bu] 2 +"europe\-west4" +.RS 2 +.IP \[bu] 2 +Netherlands. +.RE +.IP \[bu] 2 "us\-central1" .RS 2 .IP \[bu] 2 @@ -13853,6 +14628,12 @@ Northern Virginia. .IP \[bu] 2 Oregon. .RE +.IP \[bu] 2 +"us\-west2" +.RS 2 +.IP \[bu] 2 +California. +.RE .RE .SS \-\-gcs\-storage\-class .PP @@ -15062,6 +15843,28 @@ Env Var: RCLONE_DRIVE_V2_DOWNLOAD_MIN_SIZE Type: SizeSuffix .IP \[bu] 2 Default: off +.SS \-\-drive\-pacer\-min\-sleep +.PP +Minimum time to sleep between API calls. +.IP \[bu] 2 +Config: pacer_min_sleep +.IP \[bu] 2 +Env Var: RCLONE_DRIVE_PACER_MIN_SLEEP +.IP \[bu] 2 +Type: Duration +.IP \[bu] 2 +Default: 100ms +.SS \-\-drive\-pacer\-burst +.PP +Number of API calls to allow without sleeping. +.IP \[bu] 2 +Config: pacer_burst +.IP \[bu] 2 +Env Var: RCLONE_DRIVE_PACER_BURST +.IP \[bu] 2 +Type: int +.IP \[bu] 2 +Default: 100 .SS Limitations .PP Drive has quite a lot of rate limiting. @@ -15120,10 +15923,13 @@ each client_id can do set by Google. rclone already has a high quota and I will continue to make sure it is high enough by contacting Google. .PP -However you might find you get better performance making your own -client_id if you are a heavy user. -Or you may not depending on exactly how Google have been raising -rclone\[aq]s rate limit. +It is strongly recommended to use your own client ID as the default +rclone ID is heavily used. +If you have multiple services running, it is recommended to use an API +key for each service. +The default Google quota is 10 transactions per second so it is +recommended to stay under that number as if you use more than that, it +will cause rclone to rate limit and make things slower. .PP Here is how to create your own Google Drive client ID for rclone: .IP "1." 
3
@@ -15314,6 +16120,12 @@ Examples:
.IP \[bu] 2
Connect to example.com
.RE
+.IP \[bu] 2
+"https://user:pass\@example.com"
+.RS 2
+.IP \[bu] 2
+Connect to example.com using a username and password
+.RE
.RE
.SS Hubic
.PP
@@ -15504,6 +16316,26 @@ Env Var: RCLONE_HUBIC_CHUNK_SIZE
Type: SizeSuffix
.IP \[bu] 2
Default: 5G
+.SS \-\-hubic\-no\-chunk
+.PP
+Don\[aq]t chunk files during streaming upload.
+.PP
+When doing streaming uploads (eg using rcat or mount) setting this flag
+will cause the swift backend to not upload chunked files.
+.PP
+This will limit the maximum upload size to 5GB.
+However non chunked files are easier to deal with and have an MD5SUM.
+.PP
+Rclone will still chunk files bigger than chunk_size when doing normal
+copy operations.
+.IP \[bu] 2
+Config: no_chunk
+.IP \[bu] 2
+Env Var: RCLONE_HUBIC_NO_CHUNK
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS Limitations
.PP
This uses the normal OpenStack Swift mechanism to refresh the Swift API
@@ -15658,7 +16490,7 @@ limit (unless it is unlimited) and the current usage.
Here are the standard options specific to jottacloud (JottaCloud).
.SS \-\-jottacloud\-user
.PP
-User Name
+User Name:
.IP \[bu] 2
Config: user
.IP \[bu] 2
@@ -15667,17 +16499,6 @@ Env Var: RCLONE_JOTTACLOUD_USER
Type: string
.IP \[bu] 2
Default: ""
-.SS \-\-jottacloud\-pass
-.PP
-Password.
-.IP \[bu] 2
-Config: pass
-.IP \[bu] 2
-Env Var: RCLONE_JOTTACLOUD_PASS
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Default: ""
.SS \-\-jottacloud\-mountpoint
.PP
The mountpoint to use.
@@ -15745,6 +16566,17 @@ Env Var: RCLONE_JOTTACLOUD_UNLINK
Type: bool
.IP \[bu] 2
Default: false
+.SS \-\-jottacloud\-upload\-resume\-limit
+.PP
+Files bigger than this can be resumed if the upload fails.
+.IP \[bu] 2 +Config: upload_resume_limit +.IP \[bu] 2 +Env Var: RCLONE_JOTTACLOUD_UPLOAD_RESUME_LIMIT +.IP \[bu] 2 +Type: SizeSuffix +.IP \[bu] 2 +Default: 10M .SS Limitations .PP Note that Jottacloud is case insensitive so you can\[aq]t have a file @@ -16549,12 +17381,23 @@ equivalent. For example if a file has a \f[C]?\f[] in it will be mapped to \f[C]?\f[] instead. .PP -The largest allowed file size is 10GiB (10,737,418,240 bytes). +The largest allowed file sizes are 15GB for OneDrive for Business and +35GB for OneDrive Personal (Updated 4 Jan 2019). +.PP +The entire path, including the file name, must contain fewer than 400 +characters for OneDrive, OneDrive for Business and SharePoint Online. +If you are encrypting file and folder names with rclone, you may want to +pay attention to this limitation because the encrypted names are +typically longer than the original ones. .PP OneDrive seems to be OK with at least 50,000 files in a folder, but at 100,000 rclone will get errors listing the directory like \f[C]couldn't\ list\ files:\ UnknownError:\f[]. See #2707 (https://github.com/ncw/rclone/issues/2707) for more info. +.PP +An official document about the limitations for different types of +OneDrive can be found +here (https://support.office.com/en-us/article/invalid-file-names-and-file-types-in-onedrive-onedrive-for-business-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa). .SS Versioning issue .PP Every change in OneDrive causes the service to create a new version. @@ -16566,6 +17409,33 @@ The \f[C]copy\f[] is the only rclone command affected by this as we copy the file and then afterwards set the modification time to match the source file. .PP +\f[B]Note\f[]: Starting October 2018, users will no longer be able to +disable versioning by default. +This is because Microsoft has brought an +update (https://techcommunity.microsoft.com/t5/Microsoft-OneDrive-Blog/New-Updates-to-OneDrive-and-SharePoint-Team-Site-Versioning/ba-p/204390) +to the mechanism. 
+To change this new default setting, a PowerShell command is required to +be run by a SharePoint admin. +If you are an admin, you can run these commands in PowerShell to change +that setting: +.IP "1." 3 +\f[C]Install\-Module\ \-Name\ Microsoft.Online.SharePoint.PowerShell\f[] +(in case you haven\[aq]t installed this already) +.IP "2." 3 +\f[C]Import\-Module\ Microsoft.Online.SharePoint.PowerShell\ \-DisableNameChecking\f[] +.IP "3." 3 +\f[C]Connect\-SPOService\ \-Url\ https://YOURSITE\-admin.sharepoint.com\ \-Credential\ YOU\@YOURSITE.COM\f[] +(replacing \f[C]YOURSITE\f[], \f[C]YOU\f[], \f[C]YOURSITE.COM\f[] with +the actual values; this will prompt for your credentials) +.IP "4." 3 +\f[C]Set\-SPOTenant\ \-EnableMinimumVersionRequirement\ $False\f[] +.IP "5." 3 +\f[C]Disconnect\-SPOService\f[] (to disconnect from the server) +.PP +\f[I]Below are the steps for normal users to disable versioning. If you +don\[aq]t see the "No Versioning" option, make sure the above +requirements are met.\f[] +.PP User Weropol (https://github.com/Weropol) has found a method to disable versioning on OneDrive .IP "1." 3 @@ -17055,6 +17925,61 @@ Env Var: RCLONE_QINGSTOR_CONNECTION_RETRIES Type: int .IP \[bu] 2 Default: 3 +.SS \-\-qingstor\-upload\-cutoff +.PP +Cutoff for switching to chunked upload +.PP +Any files larger than this will be uploaded in chunks of chunk_size. +The minimum is 0 and the maximum is 5GB. +.IP \[bu] 2 +Config: upload_cutoff +.IP \[bu] 2 +Env Var: RCLONE_QINGSTOR_UPLOAD_CUTOFF +.IP \[bu] 2 +Type: SizeSuffix +.IP \[bu] 2 +Default: 200M +.SS \-\-qingstor\-chunk\-size +.PP +Chunk size to use for uploading. +.PP +When uploading files larger than upload_cutoff they will be uploaded as +multipart uploads using this chunk size. +.PP +Note that "\-\-qingstor\-upload\-concurrency" chunks of this size are +buffered in memory per transfer. 
+.PP +If you are transferring large files over high speed links and you have +enough memory, then increasing this will speed up the transfers. +.IP \[bu] 2 +Config: chunk_size +.IP \[bu] 2 +Env Var: RCLONE_QINGSTOR_CHUNK_SIZE +.IP \[bu] 2 +Type: SizeSuffix +.IP \[bu] 2 +Default: 4M +.SS \-\-qingstor\-upload\-concurrency +.PP +Concurrency for multipart uploads. +.PP +This is the number of chunks of the same file that are uploaded +concurrently. +.PP +NB if you set this to > 1 then the checksums of multipart uploads become +corrupted (the uploads themselves are not corrupted though). +.PP +If you are uploading small numbers of large files over high speed links +and these uploads do not fully utilize your bandwidth, then increasing +this may help to speed up the transfers. +.IP \[bu] 2 +Config: upload_concurrency +.IP \[bu] 2 +Env Var: RCLONE_QINGSTOR_UPLOAD_CONCURRENCY +.IP \[bu] 2 +Type: int +.IP \[bu] 2 +Default: 1 .SS Swift .PP Swift refers to Openstack Object @@ -17544,6 +18469,39 @@ Env Var: RCLONE_SWIFT_AUTH_TOKEN Type: string .IP \[bu] 2 Default: "" +.SS \-\-swift\-application\-credential\-id +.PP +Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) +.IP \[bu] 2 +Config: application_credential_id +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_ID +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Default: "" +.SS \-\-swift\-application\-credential\-name +.PP +Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) +.IP \[bu] 2 +Config: application_credential_name +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_NAME +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Default: "" +.SS \-\-swift\-application\-credential\-secret +.PP +Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) +.IP \[bu] 2 +Config: application_credential_secret +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_SECRET +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Default: "" .SS \-\-swift\-auth\-version .PP AuthVersion \- optional \- set to (1,2,3) if your auth URL
has no @@ -17645,6 +18603,26 @@ Env Var: RCLONE_SWIFT_CHUNK_SIZE Type: SizeSuffix .IP \[bu] 2 Default: 5G +.SS \-\-swift\-no\-chunk +.PP +Don\[aq]t chunk files during streaming upload. +.PP +When doing streaming uploads (eg using rcat or mount) setting this flag +will cause the swift backend to not upload chunked files. +.PP +This will limit the maximum upload size to 5GB. +However non chunked files are easier to deal with and have an MD5SUM. +.PP +Rclone will still chunk files bigger than chunk_size when doing normal +copy operations. +.IP \[bu] 2 +Config: no_chunk +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_NO_CHUNK +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false .SS Modified time .PP The modified time is stored as metadata on the object as @@ -17993,12 +18971,21 @@ Key file .IP \[bu] 2 ssh\-agent .PP -Key files should be unencrypted PEM\-encoded private key files. +Key files should be PEM\-encoded private key files. For instance \f[C]/home/$USER/.ssh/id_rsa\f[]. +Only unencrypted OpenSSH or PEM encrypted files are supported. .PP If you don\[aq]t specify \f[C]pass\f[] or \f[C]key_file\f[] then rclone will attempt to contact an ssh\-agent. .PP +You can also specify \f[C]key_use_agent\f[] to force the usage of an +ssh\-agent. +In this case \f[C]key_file\f[] can also be specified to force the usage +of a specific key in the ssh\-agent. +.PP +Using an ssh\-agent is the only way to load encrypted OpenSSH keys at +the moment. +.PP If you set the \f[C]\-\-sftp\-ask\-password\f[] option, rclone will prompt for a password when needed and no password has been configured. .SS ssh\-agent on macOS @@ -18094,8 +19081,8 @@ Type: string Default: "" .SS \-\-sftp\-key\-file .PP -Path to unencrypted PEM\-encoded private key file, leave blank to use -ssh\-agent. +Path to PEM\-encoded private key file, leave blank or set +key\-use\-agent to use ssh\-agent. 
.IP \[bu] 2 Config: key_file .IP \[bu] 2 @@ -18104,6 +19091,37 @@ Env Var: RCLONE_SFTP_KEY_FILE Type: string .IP \[bu] 2 Default: "" +.SS \-\-sftp\-key\-file\-pass +.PP +The passphrase to decrypt the PEM\-encoded private key file. +.PP +Only PEM encrypted key files (old OpenSSH format) are supported. +Encrypted keys in the new OpenSSH format can\[aq]t be used. +.IP \[bu] 2 +Config: key_file_pass +.IP \[bu] 2 +Env Var: RCLONE_SFTP_KEY_FILE_PASS +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Default: "" +.SS \-\-sftp\-key\-use\-agent +.PP +When set forces the usage of the ssh\-agent. +.PP +When key\-file is also set, the ".pub" file of the specified key\-file +is read and only the associated key is requested from the ssh\-agent. +This allows you to avoid +\f[C]Too\ many\ authentication\ failures\ for\ *username*\f[] errors +when the ssh\-agent contains many keys. +.IP \[bu] 2 +Config: key_use_agent +.IP \[bu] 2 +Env Var: RCLONE_SFTP_KEY_USE_AGENT +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false .SS \-\-sftp\-use\-insecure\-cipher .PP Enable the use of the aes128\-cbc cipher. @@ -18539,7 +19557,11 @@ Plain WebDAV does not support modified times. However when used with Owncloud or Nextcloud rclone will support modified times. .PP -Hashes are not supported. +Likewise plain WebDAV does not support hashes, however when used with +Owncloud or Nextcloud rclone will support SHA1 and MD5 hashes. +Depending on the exact version of Owncloud or Nextcloud hashes may +appear on all objects, or only on objects which had a hash uploaded with +them. .SS Standard Options .PP Here are the standard options specific to webdav (Webdav).
@@ -18730,8 +19752,8 @@ pass\ =\ encryptedpassword .fi .SS dCache .PP -dCache is a storage system with WebDAV doors that support, beside basic -and x509, authentication with +dCache (https://www.dcache.org/) is a storage system with WebDAV doors +that support, besides basic and x509, authentication with Macaroons (https://www.dcache.org/manuals/workshop-2017-05-29-Umea/000-Final/anupam_macaroons_v02.pdf) (bearer tokens). .PP @@ -18754,7 +19776,7 @@ bearer_token\ =\ your\-macaroon .fi .PP There is a -script (https://github.com/onnozweers/dcache-scripts/blob/master/get-share-link) +script (https://github.com/sara-nl/GridScripts/blob/master/get-macaroon) that obtains a Macaroon from a dCache WebDAV endpoint, and creates an rclone config file. .SS Yandex Disk @@ -18903,6 +19925,20 @@ This command does not take any path arguments. To view your current quota you can use the \f[C]rclone\ about\ remote:\f[] command which will display your usage limit (quota) and the current usage. +.SS Limitations +.PP +When uploading very large files (bigger than about 5GB) you will need to +increase the \f[C]\-\-timeout\f[] parameter. +This is because Yandex pauses (perhaps to calculate the MD5SUM for the +entire file) before returning confirmation that the file has been +uploaded. +The default handling of timeouts in rclone is to assume a 5 minute pause +is an error and close the connection \- you\[aq]ll see +\f[C]net/http:\ timeout\ awaiting\ response\ headers\f[] errors in the +logs if this is happening. +Setting the timeout to twice the maximum file size in GB should be +enough, so if you want to upload a 30GB file set a timeout of +\f[C]2\ *\ 30\ =\ 60m\f[], that is \f[C]\-\-timeout\ 60m\f[]. .SS Standard Options .PP Here are the standard options specific to yandex (Yandex Disk). @@ -19036,6 +20072,8 @@ like symlinks under Windows). .PP If you supply \f[C]\-\-copy\-links\f[] or \f[C]\-L\f[] then rclone will follow the symlink and copy the pointed to file or directory.
+Note that this flag is incompatible with \f[C]\-\-links\f[] / +\f[C]\-l\f[]. .PP This flag applies to all commands. .PP @@ -19075,6 +20113,89 @@ $\ rclone\ \-L\ ls\ /tmp/a \ \ \ \ \ \ \ \ 6\ b/one \f[] .fi +.SS \-\-links, \-l +.PP +Normally rclone will ignore symlinks or junction points (which behave +like symlinks under Windows). +.PP +If you supply this flag then rclone will copy symbolic links from the +local storage, and store them as text files, with a +\[aq].rclonelink\[aq] suffix in the remote storage. +.PP +The text file will contain the target of the symbolic link (see +example). +.PP +This flag applies to all commands. +.PP +For example, supposing you have a directory structure like this +.IP +.nf +\f[C] +$\ tree\ /tmp/a +/tmp/a +├──\ file1\ \->\ ./file4 +└──\ file2\ \->\ /home/user/file3 +\f[] +.fi +.PP +Copying the entire directory with \[aq]\-l\[aq] +.IP +.nf +\f[C] +$\ rclone\ copyto\ \-l\ /tmp/a/file1\ remote:/tmp/a/ +\f[] +.fi +.PP +The remote files are created with a \[aq].rclonelink\[aq] suffix +.IP +.nf +\f[C] +$\ rclone\ ls\ remote:/tmp/a +\ \ \ \ \ \ \ 5\ file1.rclonelink +\ \ \ \ \ \ 14\ file2.rclonelink +\f[] +.fi +.PP +The remote files will contain the target of the symbolic links +.IP +.nf +\f[C] +$\ rclone\ cat\ remote:/tmp/a/file1.rclonelink +\&./file4 + +$\ rclone\ cat\ remote:/tmp/a/file2.rclonelink +/home/user/file3 +\f[] +.fi +.PP +Copying them back with \[aq]\-l\[aq] +.IP +.nf +\f[C] +$\ rclone\ copyto\ \-l\ remote:/tmp/a/\ /tmp/b/ + +$\ tree\ /tmp/b +/tmp/b +├──\ file1\ \->\ ./file4 +└──\ file2\ \->\ /home/user/file3 +\f[] +.fi +.PP +However, if copied back without \[aq]\-l\[aq] +.IP +.nf +\f[C] +$\ rclone\ copyto\ remote:/tmp/a/\ /tmp/b/ + +$\ tree\ /tmp/b +/tmp/b +├──\ file1.rclonelink +└──\ file2.rclonelink +\f[] +.fi +.PP +Note that this flag is incompatible with \f[C]\-\-copy\-links\f[] / +\f[C]\-L\f[]. .SS Restricting filesystems with \-\-one\-file\-system .PP Normally rclone will recurse through filesystems as mounted.
@@ -19163,6 +20284,18 @@ Env Var: RCLONE_LOCAL_COPY_LINKS Type: bool .IP \[bu] 2 Default: false +.SS \-\-links +.PP +Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] +extension +.IP \[bu] 2 +Config: links +.IP \[bu] 2 +Env Var: RCLONE_LOCAL_LINKS +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false .SS \-\-skip\-links .PP Don\[aq]t warn about skipped symlinks. @@ -19223,6 +20356,377 @@ Type: bool .IP \[bu] 2 Default: false .SH Changelog +.SS v1.46 \- 2019\-02\-09 +.IP \[bu] 2 +New backends +.RS 2 +.IP \[bu] 2 +Support Alibaba Cloud (Aliyun) OSS via the s3 backend (Nick Craig\-Wood) +.RE +.IP \[bu] 2 +New commands +.RS 2 +.IP \[bu] 2 +serve dlna: serves a remote via DLNA for the local network (nicolov) +.RE +.IP \[bu] 2 +New Features +.RS 2 +.IP \[bu] 2 +copy, move: Restore deprecated \f[C]\-\-no\-traverse\f[] flag (Nick +Craig\-Wood) +.RS 2 +.IP \[bu] 2 +This is useful for when transferring a small number of files into a +large destination +.RE +.IP \[bu] 2 +genautocomplete: Add remote path completion for bash completion +(Christopher Peterson & Danil Semelenov) +.IP \[bu] 2 +Buffer memory handling reworked to return memory to the OS better (Nick +Craig\-Wood) +.RS 2 +.IP \[bu] 2 +Buffer recycling library to replace sync.Pool +.IP \[bu] 2 +Optionally use memory mapped memory for better memory shrinking +.IP \[bu] 2 +Enable with \f[C]\-\-use\-mmap\f[] if having memory problems \- not +default yet +.RE +.IP \[bu] 2 +Parallelise reading of files specified by \f[C]\-\-files\-from\f[] (Nick +Craig\-Wood) +.IP \[bu] 2 +check: Add stats showing total files matched.
+(Dario Guzik) +.IP \[bu] 2 +Allow rename/delete open files under Windows (Nick Craig\-Wood) +.IP \[bu] 2 +lsjson: Use exactly the correct number of decimal places in the seconds +(Nick Craig\-Wood) +.IP \[bu] 2 +Add cookie support with cmdline switch \f[C]\-\-use\-cookies\f[] for all +HTTP based remotes (qip) +.IP \[bu] 2 +Warn if \f[C]\-\-checksum\f[] is set but there are no hashes available +(Nick Craig\-Wood) +.IP \[bu] 2 +Rework rate limiting (pacer) to be more accurate and allow bursting +(Nick Craig\-Wood) +.IP \[bu] 2 +Improve error reporting for too many/few arguments in commands (Nick +Craig\-Wood) +.IP \[bu] 2 +listremotes: Remove \f[C]\-l\f[] short flag as it conflicts with the new +global flag (weetmuts) +.IP \[bu] 2 +Make http serving with auth generate INFO messages on auth fail (Nick +Craig\-Wood) +.RE +.IP \[bu] 2 +Bug Fixes +.RS 2 +.IP \[bu] 2 +Fix layout of stats (Nick Craig\-Wood) +.IP \[bu] 2 +Fix \f[C]\-\-progress\f[] crash under Windows Jenkins (Nick Craig\-Wood) +.IP \[bu] 2 +Fix transfer of google/onedrive docs by calling Rcat in Copy when size +is \-1 (Cnly) +.IP \[bu] 2 +copyurl: Fix checking of \f[C]\-\-dry\-run\f[] (Denis Skovpen) +.RE +.IP \[bu] 2 +Mount +.RS 2 +.IP \[bu] 2 +Check that mountpoint and local directory to mount don\[aq]t overlap +(Nick Craig\-Wood) +.IP \[bu] 2 +Fix mount size under 32 bit Windows (Nick Craig\-Wood) +.RE +.IP \[bu] 2 +VFS +.RS 2 +.IP \[bu] 2 +Implement renaming of directories for backends without DirMove (Nick +Craig\-Wood) +.RS 2 +.IP \[bu] 2 +now all backends except b2 support renaming directories +.RE +.IP \[bu] 2 +Implement \f[C]\-\-vfs\-cache\-max\-size\f[] to limit the total size of +the cache (Nick Craig\-Wood) +.IP \[bu] 2 +Add \f[C]\-\-dir\-perms\f[] and \f[C]\-\-file\-perms\f[] flags to set +default permissions (Nick Craig\-Wood) +.IP \[bu] 2 +Fix deadlock on concurrent operations on a directory (Nick Craig\-Wood) +.IP \[bu] 2 +Fix deadlock between RWFileHandle.close and File.Remove (Nick 
+Craig\-Wood) +.IP \[bu] 2 +Fix renaming/deleting open files with cache mode "writes" under Windows +(Nick Craig\-Wood) +.IP \[bu] 2 +Fix panic on rename with \f[C]\-\-dry\-run\f[] set (Nick Craig\-Wood) +.IP \[bu] 2 +Fix vfs/refresh with recurse=true needing the \f[C]\-\-fast\-list\f[] +flag +.RE +.IP \[bu] 2 +Local +.RS 2 +.IP \[bu] 2 +Add support for \f[C]\-l\f[]/\f[C]\-\-links\f[] (symbolic link +translation) (yair\@unicorn) +.RS 2 +.IP \[bu] 2 +this works by showing links as \f[C]link.rclonelink\f[] \- see local +backend docs for more info +.IP \[bu] 2 +this errors if used with \f[C]\-L\f[]/\f[C]\-\-copy\-links\f[] +.RE +.IP \[bu] 2 +Fix renaming/deleting open files on Windows (Nick Craig\-Wood) +.RE +.IP \[bu] 2 +Crypt +.RS 2 +.IP \[bu] 2 +Check for maximum length before decrypting filename to fix panic (Garry +McNulty) +.RE +.IP \[bu] 2 +Azure Blob +.RS 2 +.IP \[bu] 2 +Allow building azureblob backend on *BSD (themylogin) +.IP \[bu] 2 +Use the rclone HTTP client to support \f[C]\-\-dump\ headers\f[], +\f[C]\-\-tpslimit\f[] etc (Nick Craig\-Wood) +.IP \[bu] 2 +Use the s3 pacer for 0 delay in non error conditions (Nick Craig\-Wood) +.IP \[bu] 2 +Ignore directory markers (Nick Craig\-Wood) +.IP \[bu] 2 +Stop Mkdir attempting to create existing containers (Nick Craig\-Wood) +.RE +.IP \[bu] 2 +B2 +.RS 2 +.IP \[bu] 2 +cleanup: will remove unfinished large files >24hrs old (Garry McNulty) +.IP \[bu] 2 +For a bucket limited application key check the bucket name (Nick +Craig\-Wood) +.RS 2 +.IP \[bu] 2 +before this, rclone would use the authorised bucket regardless of what +you put on the command line +.RE +.IP \[bu] 2 +Added \f[C]\-\-b2\-disable\-checksum\f[] flag (Wojciech Smigielski) +.RS 2 +.IP \[bu] 2 +this enables large files to be uploaded without a SHA\-1 hash for speed +reasons +.RE +.RE +.IP \[bu] 2 +Drive +.RS 2 +.IP \[bu] 2 +Set default pacer to 100ms for 10 tps (Nick Craig\-Wood) +.RS 2 +.IP \[bu] 2 +This fits the Google defaults much better and reduces 
the 403 errors +massively +.IP \[bu] 2 +Add \f[C]\-\-drive\-pacer\-min\-sleep\f[] and +\f[C]\-\-drive\-pacer\-burst\f[] to control the pacer +.RE +.IP \[bu] 2 +Improve ChangeNotify support for items with multiple parents (Fabian +Möller) +.IP \[bu] 2 +Fix ListR for items with multiple parents \- this fixes oddities with +\f[C]vfs/refresh\f[] (Fabian Möller) +.IP \[bu] 2 +Fix using \f[C]\-\-drive\-impersonate\f[] and appfolders (Nick +Craig\-Wood) +.IP \[bu] 2 +Fix google docs in rclone mount for some (not all) applications (Nick +Craig\-Wood) +.RE +.IP \[bu] 2 +Dropbox +.RS 2 +.IP \[bu] 2 +Retry\-After support for Dropbox backend (Mathieu Carbou) +.RE +.IP \[bu] 2 +FTP +.RS 2 +.IP \[bu] 2 +Wait for 60 seconds for a connection to Close then declare it dead (Nick +Craig\-Wood) +.RS 2 +.IP \[bu] 2 +helps with indefinite hangs on some FTP servers +.RE +.RE +.IP \[bu] 2 +Google Cloud Storage +.RS 2 +.IP \[bu] 2 +Update google cloud storage endpoints (weetmuts) +.RE +.IP \[bu] 2 +HTTP +.RS 2 +.IP \[bu] 2 +Add an example with username and password which is supported but +wasn\[aq]t documented (Nick Craig\-Wood) +.IP \[bu] 2 +Fix backend with \f[C]\-\-files\-from\f[] and non\-existent files (Nick +Craig\-Wood) +.RE +.IP \[bu] 2 +Hubic +.RS 2 +.IP \[bu] 2 +Make error message more informative if authentication fails (Nick +Craig\-Wood) +.RE +.IP \[bu] 2 +Jottacloud +.RS 2 +.IP \[bu] 2 +Resume and deduplication support (Oliver Heyme) +.IP \[bu] 2 +Use token auth for all API requests. Don\[aq]t store password anymore +(Sebastian Bünger) +.IP \[bu] 2 +Add support for 2\-factor authentication (Sebastian Bünger) +.RE +.IP \[bu] 2 +Mega +.RS 2 +.IP \[bu] 2 +Implement v2 account login which fixes logins for newer Mega accounts +(Nick Craig\-Wood) +.IP \[bu] 2 +Return error if an unknown length file is attempted to be uploaded (Nick +Craig\-Wood) +.IP \[bu] 2 +Add new error codes for better error reporting (Nick Craig\-Wood) +.RE +.IP \[bu] 2 +Onedrive +.RS 2 +.IP \[bu] 2 +Fix
broken support for "shared with me" folders (Alex Chen) +.IP \[bu] 2 +Fix root ID not normalised (Cnly) +.IP \[bu] 2 +Return err instead of panic on unknown\-sized uploads (Cnly) +.RE +.IP \[bu] 2 +Qingstor +.RS 2 +.IP \[bu] 2 +Fix goroutine leak on multipart upload errors (Nick Craig\-Wood) +.IP \[bu] 2 +Add upload chunk size/concurrency/cutoff control (Nick Craig\-Wood) +.IP \[bu] 2 +Default \f[C]\-\-qingstor\-upload\-concurrency\f[] to 1 to work around +bug (Nick Craig\-Wood) +.RE +.IP \[bu] 2 +S3 +.RS 2 +.IP \[bu] 2 +Implement \f[C]\-\-s3\-upload\-cutoff\f[] for single part uploads below +this (Nick Craig\-Wood) +.IP \[bu] 2 +Change \f[C]\-\-s3\-upload\-concurrency\f[] default to 4 to increase +performance (Nick Craig\-Wood) +.IP \[bu] 2 +Add \f[C]\-\-s3\-bucket\-acl\f[] to control bucket ACL (Nick +Craig\-Wood) +.IP \[bu] 2 +Auto detect region for buckets on operation failure (Nick Craig\-Wood) +.IP \[bu] 2 +Add GLACIER storage class (William Cocker) +.IP \[bu] 2 +Add Scaleway to s3 documentation (Rémy Léone) +.IP \[bu] 2 +Add AWS endpoint eu\-north\-1 (weetmuts) +.RE +.IP \[bu] 2 +SFTP +.RS 2 +.IP \[bu] 2 +Add support for PEM encrypted private keys (Fabian Möller) +.IP \[bu] 2 +Add option to force the usage of an ssh\-agent (Fabian Möller) +.IP \[bu] 2 +Perform environment variable expansion on key\-file (Fabian Möller) +.IP \[bu] 2 +Fix rmdir on Windows based servers (eg CrushFTP) (Nick Craig\-Wood) +.IP \[bu] 2 +Fix rmdir deleting directory contents on some SFTP servers (Nick +Craig\-Wood) +.IP \[bu] 2 +Fix error on dangling symlinks (Nick Craig\-Wood) +.RE +.IP \[bu] 2 +Swift +.RS 2 +.IP \[bu] 2 +Add \f[C]\-\-swift\-no\-chunk\f[] to disable segmented uploads in +rcat/mount (Nick Craig\-Wood) +.IP \[bu] 2 +Introduce application credential auth support (kayrus) +.IP \[bu] 2 +Fix memory usage by slimming Object (Nick Craig\-Wood) +.IP \[bu] 2 +Fix extra requests on upload (Nick Craig\-Wood) +.IP \[bu] 2 +Fix reauth on big files (Nick Craig\-Wood) +.RE +.IP
\[bu] 2 +Union +.RS 2 +.IP \[bu] 2 +Fix poll\-interval not working (Nick Craig\-Wood) +.RE +.IP \[bu] 2 +WebDAV +.RS 2 +.IP \[bu] 2 +Support About which means rclone mount will show the correct disk size +(Nick Craig\-Wood) +.IP \[bu] 2 +Support MD5 and SHA1 hashes with Owncloud and Nextcloud (Nick +Craig\-Wood) +.IP \[bu] 2 +Fail soft on time parsing errors (Nick Craig\-Wood) +.IP \[bu] 2 +Fix infinite loop on failed directory creation (Nick Craig\-Wood) +.IP \[bu] 2 +Fix identification of directories for Bitrix Site Manager (Nick +Craig\-Wood) +.IP \[bu] 2 +Fix upload of 0 length files on some servers (Nick Craig\-Wood) +.IP \[bu] 2 +Fix if MKCOL fails with 423 Locked assume the directory exists (Nick +Craig\-Wood) +.RE .SS v1.45 \- 2018\-11\-24 .IP \[bu] 2 New backends @@ -23029,9 +24533,8 @@ on all the remote storage systems. .SS Can I copy the config from one machine to another .PP Sure! Rclone stores all of its config in a single file. -If you want to find this file, the simplest way is to run -\f[C]rclone\ \-h\f[] and look at the help for the \f[C]\-\-config\f[] -flag which will tell you where it is. +If you want to find this file, run \f[C]rclone\ config\ file\f[] which +will tell you where it is. .PP See the remote setup docs (https://rclone.org/remote_setup/) for more info. @@ -23124,9 +24627,6 @@ reached over \f[C]https\f[]). Most public services will be using \f[C]https\f[], but you may wish to set both. .PP -If you ever use \f[C]FTP\f[] then you would need to set -\f[C]ftp_proxy\f[]. -.PP The content of the variable is \f[C]protocol://server:port\f[]. The protocol value is the one used to talk to the proxy server, itself, and is commonly either \f[C]http\f[] or \f[C]socks5\f[]. @@ -23160,6 +24660,8 @@ export\ no_proxy=localhost,127.0.0.0/8,my.host.name export\ NO_PROXY=$no_proxy \f[] .fi +.PP +Note that the ftp backend does not support \f[C]ftp_proxy\f[] yet. 
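As a concrete sketch of the proxy variables discussed above (the host `proxy.example.com` and port `3128` are placeholders, not from the manual), the full set might be exported like this before running rclone:

```shell
# Placeholder proxy endpoint -- substitute your own server and port.
# Both lowercase and uppercase forms are set since programs differ in
# which variant they read.
export http_proxy=http://proxy.example.com:3128
export HTTP_PROXY=$http_proxy
export https_proxy=$http_proxy
export HTTPS_PROXY=$https_proxy

# Hosts and networks that should be reached directly, bypassing the proxy.
export no_proxy=localhost,127.0.0.0/8
export NO_PROXY=$no_proxy

# Any rclone command run in this shell now goes via the proxy, eg:
#   rclone lsd remote:
echo "proxy is $HTTPS_PROXY, bypassing $NO_PROXY"
```

Because these are ordinary environment variables, they only affect processes started from the shell that exported them; putting the exports in your shell profile makes them apply to every rclone invocation.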
.SS Rclone gives x509: failed to load system roots and no roots provided error .PP @@ -23569,6 +25071,7 @@ Antoine GIRARD Mateusz Piotrowski .IP \[bu] 2 Animosity022 + .IP \[bu] 2 Peter Baumgartner .IP \[bu] 2 @@ -23696,6 +25199,44 @@ Peter Kaminski Henry Ptasinski .IP \[bu] 2 Alexander +.IP \[bu] 2 +Garry McNulty +.IP \[bu] 2 +Mathieu Carbou +.IP \[bu] 2 +Mark Otway +.IP \[bu] 2 +William Cocker <37018962+WilliamCocker@users.noreply.github.com> +.IP \[bu] 2 +François Leurent <131.js@cloudyks.org> +.IP \[bu] 2 +Arkadius Stefanski +.IP \[bu] 2 +Jay +.IP \[bu] 2 +andrea rota +.IP \[bu] 2 +nicolov +.IP \[bu] 2 +Dario Guzik +.IP \[bu] 2 +qip +.IP \[bu] 2 +yair\@unicorn +.IP \[bu] 2 +Matt Robinson +.IP \[bu] 2 +kayrus +.IP \[bu] 2 +Rémy Léone +.IP \[bu] 2 +Wojciech Smigielski +.IP \[bu] 2 +weetmuts +.IP \[bu] 2 +Jonathan +.IP \[bu] 2 +James Carpenter .SH Contact the rclone project .SS Forum .PP