---
title: "Global Flags"
description: "Rclone Global Flags"
date: "2019-08-26T15:19:45+01:00"
---

# Global Flags

This describes the global flags available to every rclone command, split into two groups: non-backend and backend flags.

## Non Backend Flags

These flags are available for every command.

```
      --ask-password                         Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                         If enabled, do not request console confirmation.
      --backup-dir string                    Make backups into hierarchy based in DIR.
      --bind string                          Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --buffer-size SizeSuffix               In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                  Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --ca-cert string                       CA certificate used to verify servers
      --cache-dir string                     Directory rclone will use for caching. (default "$HOME/.cache/rclone")
      --checkers int                         Number of checkers to run in parallel. (default 8)
  -c, --checksum                             Skip based on checksum (if available) & size, not mod-time & size
      --client-cert string                   Client SSL certificate (PEM) for mutual TLS auth
      --client-key string                    Client SSL private key (PEM) for mutual TLS auth
      --compare-dest string                  Use DIR to server side copy files from.
      --config string                        Config file. (default "$HOME/.config/rclone/rclone.conf")
      --contimeout duration                  Connect timeout (default 1m0s)
      --copy-dest string                     Compare dest to DIR also.
      --cpuprofile string                    Write cpu profile to file
      --delete-after                         When synchronizing, delete files on destination after transferring (default)
      --delete-before                        When synchronizing, delete files on destination before transferring
      --delete-during                        When synchronizing, delete files during transfer
      --delete-excluded                      Delete files on dest excluded from sync
      --disable string                       Disable a comma separated list of features.  Use help to see a list.
  -n, --dry-run                              Do a trial run with no permanent changes
      --dump DumpFlags                       List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies                          Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers                         Dump HTTP headers - may contain sensitive info
      --exclude stringArray                  Exclude files matching pattern
      --exclude-from stringArray             Read exclude patterns from file
      --exclude-if-present string            Exclude directories if filename is present
      --fast-list                            Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray               Read list of source-file names from file
  -f, --filter stringArray                   Add a file-filtering rule
      --filter-from stringArray              Read filtering patterns from a file
      --ignore-case                          Ignore case in filters (case insensitive)
      --ignore-case-sync                     Ignore case when synchronizing
      --ignore-checksum                      Skip post copy check of checksums.
      --ignore-errors                        Delete even if there are I/O errors
      --ignore-existing                      Skip all files that exist on destination
      --ignore-size                          Ignore size when skipping, use mod-time or checksum.
  -I, --ignore-times                         Don't skip files that match size and time - transfer all files
      --immutable                            Do not modify files. Fail if existing files have been modified.
      --include stringArray                  Include files matching pattern
      --include-from stringArray             Read include patterns from file
      --log-file string                      Log everything to this file
      --log-format string                    Comma separated list of log format options (default "date,time")
      --log-level string                     Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
      --low-level-retries int                Number of low level retries to do. (default 10)
      --max-age Duration                     Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-backlog int                      Maximum number of objects in sync or check backlog. (default 10000)
      --max-delete int                       When synchronizing, limit the number of deletes (default -1)
      --max-depth int                        If set limits the recursion depth to this. (default -1)
      --max-size SizeSuffix                  Only transfer files smaller than this in k or suffix b|k|M|G (default off)
      --max-stats-groups int                 Maximum number of stats groups to keep in memory. On max oldest is discarded. (default 1000)
      --max-transfer SizeSuffix              Maximum size of data to transfer. (default off)
      --memprofile string                    Write memory profile to file
      --min-age Duration                     Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --min-size SizeSuffix                  Only transfer files bigger than this in k or suffix b|k|M|G (default off)
      --modify-window duration               Max time diff to be considered the same (default 1ns)
      --multi-thread-cutoff SizeSuffix       Use multi-thread downloads for files above this size. (default 250M)
      --multi-thread-streams int             Max number of streams to use for multi-thread downloads. (default 4)
      --no-check-certificate                 Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding                     Don't set Accept-Encoding: gzip.
      --no-traverse                          Don't traverse destination file system on copy.
      --no-update-modtime                    Don't update destination mod-time if files identical.
  -P, --progress                             Show progress during transfer.
  -q, --quiet                                Print as little stuff as possible
      --rc                                   Enable the remote control server.
      --rc-addr string                       IPaddress:Port or :Port to bind server to. (default "localhost:5572")
      --rc-allow-origin string               Set the allowed origin for CORS.
      --rc-baseurl string                    Prefix for URLs - leave blank for root.
      --rc-cert string                       SSL PEM key (concatenation of certificate and CA certificate)
      --rc-client-ca string                  Client certificate authority to verify clients with
      --rc-files string                      Path to local files to serve on the HTTP server.
      --rc-htpasswd string                   htpasswd file - if not provided no authentication is done
      --rc-job-expire-duration duration      Expire finished async jobs older than this value (default 1m0s)
      --rc-job-expire-interval duration      Interval to check for expired async jobs (default 10s)
      --rc-key string                        SSL PEM Private key
      --rc-max-header-bytes int              Maximum size of request header (default 4096)
      --rc-no-auth                           Don't require auth for certain methods.
      --rc-pass string                       Password for authentication.
      --rc-realm string                      Realm for authentication (default "rclone")
      --rc-serve                             Enable the serving of remote objects.
      --rc-server-read-timeout duration      Timeout for server reading data (default 1h0m0s)
      --rc-server-write-timeout duration     Timeout for server writing data (default 1h0m0s)
      --rc-user string                       User name for authentication.
      --rc-web-fetch-url string              URL to fetch the releases for webgui. (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest")
      --rc-web-gui                           Launch WebGUI on localhost
      --rc-web-gui-update                    Update / Force update to latest version of web gui
      --retries int                          Retry operations this many times if they fail (default 3)
      --retries-sleep duration               Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
      --size-only                            Skip based on size only, not mod-time or checksum
      --stats duration                       Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
      --stats-file-name-length int           Max file name length in stats. 0 for no limit (default 45)
      --stats-log-level string               Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
      --stats-one-line                       Make the stats fit on one line.
      --stats-one-line-date                  Enables --stats-one-line and adds the current date/time as a prefix.
      --stats-one-line-date-format string    Enables --stats-one-line-date and uses a custom-formatted date. Enclose the date string in double quotes ("). See https://golang.org/pkg/time/#Time.Format
      --stats-unit string                    Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
      --streaming-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
      --suffix string                        Suffix to add to changed files.
      --suffix-keep-extension                Preserve the extension when using --suffix.
      --syslog                               Use Syslog for logging
      --syslog-facility string               Facility for syslog, eg KERN,USER,... (default "DAEMON")
      --timeout duration                     IO idle timeout (default 5m0s)
      --tpslimit float                       Limit HTTP transactions per second to this.
      --tpslimit-burst int                   Max burst of transactions for --tpslimit. (default 1)
      --track-renames                        When synchronizing, track file renames and do a server side move if possible
      --transfers int                        Number of file transfers to run in parallel. (default 4)
  -u, --update                               Skip files that are newer on the destination.
      --use-cookies                          Enable session cookiejar.
      --use-json-log                         Use json log format.
      --use-mmap                             Use mmap allocator (see docs).
      --use-server-modtime                   Use server modified time instead of object metadata
      --user-agent string                    Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.49.0")
  -v, --verbose count                        Print lots more stuff (repeat for more)
```
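
For example, the non-backend flags can be combined with any rclone command. The sketch below (the remote name `remote:backup` and the paths are placeholders) trial-runs a sync with more parallelism, a bandwidth cap and verbose logging to a file:

```
# Trial-run a sync: nothing is changed, but the planned operations are logged.
# "remote:backup" and the local path are placeholders for your own remote and data.
rclone sync /home/user/documents remote:backup \
    --dry-run \
    --transfers 8 \
    --checkers 16 \
    --bwlimit 1M \
    --log-file /tmp/rclone.log \
    -v
```

Dropping `--dry-run` runs the same sync for real.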

## Backend Flags

These flags are available for every command. They control the backends and may be set in the config file.

```
      --acd-auth-url string                          Auth server URL.
      --acd-client-id string                         Amazon Application Client ID.
      --acd-client-secret string                     Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix            Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                         Token server url.
      --acd-upload-wait-per-gb Duration              Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                          Remote or path to alias.
      --azureblob-access-tier string                 Access tier of blob: hot, cool or archive.
      --azureblob-account string                     Storage Account Name (leave blank to use SAS URL or Emulator)
      --azureblob-chunk-size SizeSuffix              Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string                    Endpoint for the service
      --azureblob-key string                         Storage Account Key (leave blank to use SAS URL or Emulator)
      --azureblob-list-chunk int                     Size of blob list. (default 5000)
      --azureblob-sas-url string                     SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix           Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --azureblob-use-emulator                       Uses local storage emulator if provided as 'true' (leave blank if using real azure storage endpoint)
      --b2-account string                            Account ID or Application Key ID
      --b2-chunk-size SizeSuffix                     Upload chunk size. Must fit in memory. (default 96M)
      --b2-disable-checksum                          Disable checksums for large (> upload cutoff) files
      --b2-download-auth-duration Duration           Time before the authorization token will expire in s or suffix ms|s|m|h|d. (default 1w)
      --b2-download-url string                       Custom endpoint for downloads.
      --b2-endpoint string                           Endpoint for the service.
      --b2-hard-delete                               Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                                Application Key
      --b2-test-mode string                          A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix                  Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                                  Include old versions in directory listings.
      --box-client-id string                         Box App Client Id.
      --box-client-secret string                     Box App Client Secret
      --box-commit-retries int                       Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix                 Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --cache-chunk-clean-interval Duration          How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory                        Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string                      Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix                  The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix            The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string                         Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
      --cache-db-purge                               Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration                  How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-info-age Duration                      How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string                   Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string                   The password of the Plex user
      --cache-plex-url string                        The URL of the Plex server
      --cache-plex-username string                   The username of the Plex user
      --cache-read-retries int                       How many times to retry a read from a cache storage. (default 10)
      --cache-remote string                          Remote to cache.
      --cache-rps int                                Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string                 Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration                 How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int                            How many workers should run in parallel to download chunks. (default 4)
      --cache-writes                                 Cache file data on writes through the FS
  -L, --copy-links                                   Follow symlinks and copy the pointed to item.
      --crypt-directory-name-encryption              Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string             How to encrypt the filenames. (default "standard")
      --crypt-password string                        Password or pass phrase for encryption.
      --crypt-password2 string                       Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string                          Remote to encrypt/decrypt.
      --crypt-show-mapping                           For all files listed show how the names encrypt.
      --drive-acknowledge-abuse                      Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change               Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export                       Use alternate export URLs for Google documents export.
      --drive-auth-owner-only                        Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix                  Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string                       Google Application Client Id
      --drive-client-secret string                   Google Application Client Secret
      --drive-export-formats string                  Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string                         Deprecated: see export_formats
      --drive-impersonate string                     Impersonate this user when using a service account.
      --drive-import-formats string                  Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever                  Keep new head revision of each file forever.
      --drive-list-chunk int                         Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-pacer-burst int                        Number of API calls to allow without sleeping. (default 100)
      --drive-pacer-min-sleep Duration               Minimum time to sleep between API calls. (default 100ms)
      --drive-root-folder-id string                  ID of the root folder
      --drive-scope string                           Scope that rclone should use when requesting access from drive.
      --drive-server-side-across-configs             Allow server side operations (eg copy) to work across different drive configs.
      --drive-service-account-credentials string     Service Account Credentials JSON blob
      --drive-service-account-file string            Service Account Credentials JSON file path
      --drive-shared-with-me                         Only show files that are shared with me.
      --drive-size-as-quota                          Show storage quota usage for file size.
      --drive-skip-checksum-gphotos                  Skip MD5 checksum on Google photos and videos only.
      --drive-skip-gdocs                             Skip Google documents in all listings.
      --drive-team-drive string                      ID of the Team Drive
      --drive-trashed-only                           Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix               Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date                       Use file created date instead of modified date.
      --drive-use-trash                              Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix        If objects are greater, use the drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix                Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string                     Dropbox App Client Id
      --dropbox-client-secret string                 Dropbox App Client Secret
      --dropbox-impersonate string                   Impersonate this user when using a business account.
      --fichier-api-key string                       Your API Key, get it from https://1fichier.com/console/params.pl
      --fichier-shared-folder string                 If you want to download a shared folder, add this parameter
      --ftp-concurrency int                          Maximum number of FTP simultaneous connections, 0 for unlimited
      --ftp-host string                              FTP host to connect to
      --ftp-no-check-certificate                     Do not verify the TLS certificate of the server
      --ftp-pass string                              FTP password
      --ftp-port string                              FTP port, leave blank to use default (21)
      --ftp-tls                                      Use FTP over TLS (Implicit)
      --ftp-user string                              FTP username, leave blank for current username, $USER
      --gcs-bucket-acl string                        Access Control List for new buckets.
      --gcs-bucket-policy-only                       Access checks should use bucket-level IAM policies.
      --gcs-client-id string                         Google Application Client Id
      --gcs-client-secret string                     Google Application Client Secret
      --gcs-location string                          Location for the newly created buckets.
      --gcs-object-acl string                        Access Control List for new objects.
      --gcs-project-number string                    Project number.
      --gcs-service-account-file string              Service Account Credentials JSON file path
      --gcs-storage-class string                     The storage class to use when storing objects in Google Cloud Storage.
      --gphotos-client-id string                     Google Application Client Id
      --gphotos-client-secret string                 Google Application Client Secret
      --gphotos-read-only                            Set to make the Google Photos backend read only.
      --gphotos-read-size                            Set to read the size of media items.
      --http-headers CommaSepList                    Set HTTP headers for all transactions
      --http-no-slash                                Set this if the site doesn't end directories with /
      --http-url string                              URL of http host to connect to
      --hubic-chunk-size SizeSuffix                  Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string                       Hubic Client Id
      --hubic-client-secret string                   Hubic Client Secret
      --hubic-no-chunk                               Don't chunk files during streaming upload.
      --jottacloud-hard-delete                       Delete files permanently rather than putting them into the trash.
      --jottacloud-md5-memory-limit SizeSuffix       Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-unlink                            Remove existing public link to file/folder with link command rather than creating.
      --jottacloud-upload-resume-limit SizeSuffix    Files bigger than this can be resumed if the upload fails. (default 10M)
      --koofr-endpoint string                        The Koofr API endpoint to use (default "https://app.koofr.net")
      --koofr-mountid string                         Mount ID of the mount to use. If omitted, the primary mount is used.
      --koofr-password string                        Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
      --koofr-setmtime                               Does the backend support setting modification time? Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend. (default true)
      --koofr-user string                            Your Koofr user name
  -l, --links                                        Translate symlinks to/from regular files with a '.rclonelink' extension
      --local-case-insensitive                       Force the filesystem to report itself as case insensitive
      --local-case-sensitive                         Force the filesystem to report itself as case sensitive.
      --local-no-check-updated                       Don't check to see if the files change during upload
      --local-no-unicode-normalization               Don't apply unicode normalization to paths and filenames (Deprecated)
      --local-nounc string                           Disable UNC (long path names) conversion on Windows
      --mega-debug                                   Output more debug from Mega.
      --mega-hard-delete                             Delete files permanently rather than putting them into the trash.
      --mega-pass string                             Password.
      --mega-user string                             User name
  -x, --one-file-system                              Don't cross filesystem boundaries (unix/macOS only).
      --onedrive-chunk-size SizeSuffix               Chunk size to upload files with - must be multiple of 320k. (default 10M)
      --onedrive-client-id string                    Microsoft App Client Id
      --onedrive-client-secret string                Microsoft App Client Secret
      --onedrive-drive-id string                     The ID of the drive to use
      --onedrive-drive-type string                   The type of the drive ( personal | business | documentLibrary )
      --onedrive-expose-onenote-files                Set to make OneNote files show up in directory listings.
      --opendrive-password string                    Password.
      --opendrive-username string                    Username
      --pcloud-client-id string                      Pcloud App Client Id
      --pcloud-client-secret string                  Pcloud App Client Secret
      --qingstor-access-key-id string                QingStor Access Key ID
      --qingstor-chunk-size SizeSuffix               Chunk size to use for uploading. (default 4M)
      --qingstor-connection-retries int              Number of connection retries. (default 3)
      --qingstor-endpoint string                     Enter an endpoint URL to connect to the QingStor API.
      --qingstor-env-auth                            Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
      --qingstor-secret-access-key string            QingStor Secret Access Key (password)
      --qingstor-upload-concurrency int              Concurrency for multipart uploads. (default 1)
      --qingstor-upload-cutoff SizeSuffix            Cutoff for switching to chunked upload (default 200M)
      --qingstor-zone string                         Zone to connect to.
      --s3-access-key-id string                      AWS Access Key ID.
      --s3-acl string                                Canned ACL used when creating buckets and storing or copying objects.
      --s3-bucket-acl string                         Canned ACL used when creating buckets.
      --s3-chunk-size SizeSuffix                     Chunk size to use for uploading. (default 5M)
      --s3-disable-checksum                          Don't store MD5 checksum with object metadata
      --s3-endpoint string                           Endpoint for S3 API.
      --s3-env-auth                                  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
      --s3-force-path-style                          If true use path style access, if false use virtual hosted style. (default true)
      --s3-location-constraint string                Location constraint - must be set to match the Region.
      --s3-provider string                           Choose your S3 provider.
      --s3-region string                             Region to connect to.
      --s3-secret-access-key string                  AWS Secret Access Key (password)
      --s3-server-side-encryption string             The server-side encryption algorithm used when storing this object in S3.
      --s3-session-token string                      An AWS session token
      --s3-sse-kms-key-id string                     If using KMS ID you must provide the ARN of Key.
      --s3-storage-class string                      The storage class to use when storing new objects in S3.
      --s3-upload-concurrency int                    Concurrency for multipart uploads. (default 4)
      --s3-upload-cutoff SizeSuffix                  Cutoff for switching to chunked upload (default 200M)
      --s3-use-accelerate-endpoint                   If true use the AWS S3 accelerated endpoint.
      --s3-v2-auth                                   If true use v2 authentication.
      --sftp-ask-password                            Allow asking for SFTP password when needed.
      --sftp-disable-hashcheck                       Disable the execution of SSH commands to determine if remote file hashing is available.
      --sftp-host string                             SSH host to connect to
      --sftp-key-file string                         Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
      --sftp-key-file-pass string                    The passphrase to decrypt the PEM-encoded private key file.
      --sftp-key-use-agent                           When set forces the usage of the ssh-agent.
      --sftp-md5sum-command string                   The command used to read md5 hashes. Leave blank for autodetect.
      --sftp-pass string                             SSH password, leave blank to use ssh-agent.
      --sftp-path-override string                    Override path used by SSH connection.
      --sftp-port string                             SSH port, leave blank to use default (22)
      --sftp-set-modtime                             Set the modified time on the remote if set. (default true)
      --sftp-sha1sum-command string                  The command used to read sha1 hashes. Leave blank for autodetect.
      --sftp-use-insecure-cipher                     Enable the use of the aes128-cbc cipher and diffie-hellman-group-exchange-sha256, diffie-hellman-group-exchange-sha1 key exchange. Those algorithms are insecure and may allow plaintext data to be recovered by an attacker.
      --sftp-user string                             SSH username, leave blank for current username, ncw
      --skip-links                                   Don't warn about skipped symlinks.
      --swift-application-credential-id string       Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
      --swift-application-credential-name string     Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
      --swift-application-credential-secret string   Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
      --swift-auth string                            Authentication URL for server (OS_AUTH_URL).
      --swift-auth-token string                      Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
      --swift-auth-version int                       AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
      --swift-chunk-size SizeSuffix                  Above this size files will be chunked into a _segments container. (default 5G)
      --swift-domain string                          User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
      --swift-endpoint-type string                   Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
      --swift-env-auth                               Get swift credentials from environment variables in standard OpenStack form.
      --swift-key string                             API key or password (OS_PASSWORD).
      --swift-no-chunk                               Don't chunk files during streaming upload.
      --swift-region string                          Region name - optional (OS_REGION_NAME)
      --swift-storage-policy string                  The storage policy to use when creating a new container
      --swift-storage-url string                     Storage URL - optional (OS_STORAGE_URL)
      --swift-tenant string                          Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
      --swift-tenant-domain string                   Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
      --swift-tenant-id string                       Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
      --swift-user string                            User name to log in (OS_USERNAME).
      --swift-user-id string                         User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
      --union-remotes string                         List of space separated remotes.
      --webdav-bearer-token string                   Bearer token instead of user/pass (eg a Macaroon)
      --webdav-bearer-token-command string           Command to run to get a bearer token
      --webdav-pass string                           Password.
      --webdav-url string                            URL of http host to connect to
      --webdav-user string                           User name
      --webdav-vendor string                         Name of the Webdav site/service/software you are using
      --yandex-client-id string                      Yandex Client Id
      --yandex-client-secret string                  Yandex Client Secret
      --yandex-unlink                                Remove existing public link to file/folder with link command rather than creating.
```
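
Backend flags correspond to backend options, so the same setting can usually be supplied in several ways: as a flag on the command line, through an environment variable (the flag name upper-cased, with dashes replaced by underscores and prefixed with `RCLONE_`), or stored under the option name in the relevant remote's section of the config file. A minimal sketch, assuming a Google Drive remote named `gdrive:` has already been configured:

```
# 1. As a command line flag, for a single run:
rclone copy /local/path gdrive:backup --drive-chunk-size 64M

# 2. As an environment variable, for the rest of this shell session:
export RCLONE_DRIVE_CHUNK_SIZE=64M
rclone copy /local/path gdrive:backup
```

The equivalent persistent setting is `chunk_size = 64M` in the `[gdrive]` section of the config file (the option name without the backend prefix).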