Version v1.54.0

This commit is contained in:
Nick Craig-Wood 2021-02-02 13:42:35 +00:00
parent 8b41dfa50a
commit 7f5ee5d81f
68 changed files with 49017 additions and 29240 deletions

20496 MANUAL.html generated (file diff suppressed because one or more lines are too long)
5207 MANUAL.md generated (file diff suppressed because it is too large)
21032 MANUAL.txt generated (file diff suppressed because it is too large)

View file

@ -160,6 +160,26 @@ Storage Account Name (leave blank to use SAS URL or Emulator)
- Type: string
- Default: ""
#### --azureblob-service-principal-file
Path to file containing credentials for use with a service principal.
Leave blank normally. Needed only if you want to use a service principal instead of interactive login.
$ az ad sp create-for-rbac --name "<name>" \
--role "Storage Blob Data Owner" \
--scopes "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/blobServices/default/containers/<container>" \
> azure-principal.json
See [Use Azure CLI to assign an Azure role for access to blob and queue data](https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-cli)
for more details.
- Config: service_principal_file
- Env Var: RCLONE_AZUREBLOB_SERVICE_PRINCIPAL_FILE
- Type: string
- Default: ""
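As a rough illustration of using the credentials file created above (the remote name `azblob:` and the container name are placeholders, not from the original docs):
```
# Hypothetical example: list a container, authenticating with the service principal file
rclone lsd azblob:mycontainer --azureblob-service-principal-file azure-principal.json
```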
#### --azureblob-key
Storage Account Key (leave blank to use SAS URL or Emulator)
@ -179,6 +199,24 @@ SAS URL for container level access only
- Type: string
- Default: ""
#### --azureblob-use-msi
Use a managed service identity to authenticate (only works in Azure)
When true, use a [managed service identity](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/)
to authenticate to Azure Storage instead of a SAS token or account key.
If the VM(SS) on which this program is running has a system-assigned identity, it will
be used by default. If the resource has no system-assigned but exactly one user-assigned identity,
the user-assigned identity will be used by default. If the resource has multiple user-assigned
identities, the identity to use must be explicitly specified using exactly one of the msi_object_id,
msi_client_id, or msi_mi_res_id parameters.
- Config: use_msi
- Env Var: RCLONE_AZUREBLOB_USE_MSI
- Type: bool
- Default: false
#### --azureblob-use-emulator
Uses local storage emulator if provided as 'true' (leave blank if using real azure storage endpoint)
@ -192,6 +230,33 @@ Uses local storage emulator if provided as 'true' (leave blank if using real azu
Here are the advanced options specific to azureblob (Microsoft Azure Blob Storage).
#### --azureblob-msi-object-id
Object ID of the user-assigned MSI to use, if any. Leave blank if msi_client_id or msi_mi_res_id specified.
- Config: msi_object_id
- Env Var: RCLONE_AZUREBLOB_MSI_OBJECT_ID
- Type: string
- Default: ""
#### --azureblob-msi-client-id
Client ID of the user-assigned MSI to use, if any. Leave blank if msi_object_id or msi_mi_res_id specified.
- Config: msi_client_id
- Env Var: RCLONE_AZUREBLOB_MSI_CLIENT_ID
- Type: string
- Default: ""
#### --azureblob-msi-mi-res-id
Azure resource ID of the user-assigned MSI to use, if any. Leave blank if msi_client_id or msi_object_id specified.
- Config: msi_mi_res_id
- Env Var: RCLONE_AZUREBLOB_MSI_MI_RES_ID
- Type: string
- Default: ""
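A minimal sketch of selecting a specific user-assigned identity (the remote name and the client ID GUID below are placeholders):
```
# Hypothetical example: authenticate with a user-assigned managed identity chosen by client ID
rclone lsd azblob:mycontainer --azureblob-use-msi \
    --azureblob-msi-client-id 00000000-0000-0000-0000-000000000000
```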
#### --azureblob-endpoint
Endpoint for the service
@ -202,6 +267,15 @@ Leave blank normally.
- Type: string
- Default: ""
#### --azureblob-upload-cutoff
Cutoff for switching to chunked upload (<= 256MB). (Deprecated)
- Config: upload_cutoff
- Env Var: RCLONE_AZUREBLOB_UPLOAD_CUTOFF
- Type: string
- Default: ""
#### --azureblob-chunk-size
Upload chunk size (<= 100MB).
@ -250,10 +324,28 @@ tiering blob to "Hot" or "Cool".
- Env Var: RCLONE_AZUREBLOB_ACCESS_TIER
- Type: string
- Default: ""
- Examples:
- "Hot"
- "Cool"
- "Archive"
#### --azureblob-archive-tier-delete
Delete archive tier blobs before overwriting.
Archive tier blobs cannot be updated. So without this flag, if you
attempt to update an archive tier blob, then rclone will produce the
error:
can't update archive tier blob without --azureblob-archive-tier-delete
With this flag set then before rclone attempts to overwrite an archive
tier blob, it will delete the existing blob before uploading its
replacement. This has the potential for data loss if the upload fails
(unlike updating a normal blob) and also may cost more since deleting
archive tier blobs early may be chargeable.
- Config: archive_tier_delete
- Env Var: RCLONE_AZUREBLOB_ARCHIVE_TIER_DELETE
- Type: bool
- Default: false
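For example, a sync that is allowed to delete and re-upload archive tier blobs might look like this (the local path, remote name and container are placeholders):
```
# Hypothetical example: overwrite archive tier blobs during a sync
rclone sync /local/dir azblob:mycontainer --azureblob-archive-tier-delete
```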
#### --azureblob-disable-checksum

View file

@ -461,7 +461,9 @@ Custom endpoint for downloads.
This is usually set to a Cloudflare CDN URL as Backblaze offers
free egress for data downloaded through the Cloudflare network.
This is probably only useful for a public bucket.
Rclone works with private buckets by sending an "Authorization" header.
If the custom endpoint rewrites the requests for authentication,
e.g., in Cloudflare Workers, this header needs to be handled properly.
Leave blank if you want to use the endpoint provided by Backblaze.
- Config: download_url
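An illustrative sketch, assuming the corresponding command line flag is `--b2-download-url` and using a placeholder CDN hostname:
```
# Hypothetical example: download through a Cloudflare-fronted endpoint instead of the Backblaze one
rclone copy b2:mybucket/path/file.bin . --b2-download-url https://files.example.com
```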

View file

@ -5,6 +5,213 @@ description: "Rclone Changelog"
# Changelog
## v1.54.0 - 2021-02-02
[See commits](https://github.com/rclone/rclone/compare/v1.53.0...v1.54.0)
* New backends
* Compression remote (experimental) (buengese)
* Enterprise File Fabric (Nick Craig-Wood)
* This work was sponsored by [Storage Made Easy](https://storagemadeeasy.com/)
* HDFS (Hadoop Distributed File System) (Yury Stankevich)
* Zoho workdrive (buengese)
* New Features
* Deglobalise the config (Nick Craig-Wood)
* Global config now read from the context
* This will enable passing of global config via the rc
* This work was sponsored by [Digitalis](https://digitalis.io/)
* Add `--bwlimit` for upload and download (Nick Craig-Wood)
* Obey bwlimit in http Transport for better limiting
* Enhance systemd integration (Hekmon)
* log level identification, manual activation with flag, automatic systemd launch detection
* Don't compile systemd log integration for non unix systems (Benjamin Gustin)
* Add a `--download` flag to md5sum/sha1sum/hashsum to force rclone to download and hash files locally (lostheli)
* Add `--progress-terminal-title` to print ETA to terminal title (LaSombra)
* Make backend env vars show in help as the defaults for backend flags (Nick Craig-Wood)
* build
* Raise minimum go version to go1.12 (Nick Craig-Wood)
* dedupe
* Add `--by-hash` to dedupe on content hash not file name (Nick Craig-Wood)
* Add `--dedupe-mode list` to just list dupes, changing nothing (Nick Craig-Wood)
* Add warning if used on a remote which can't have duplicate names (Nick Craig-Wood)
* fs
* Add Shutdown optional method for backends (Nick Craig-Wood)
* When using `--files-from` check files concurrently (zhucan)
* Accumulate stats when using `--dry-run` (Ingo Weiss)
* Always show stats when using `--dry-run` or `--interactive` (Nick Craig-Wood)
* Add support for flag `--no-console` on windows to hide the console window (albertony)
* genautocomplete: Add support to output to stdout (Ingo)
* ncdu
* Highlight read errors instead of aborting (Claudio Bantaloukas)
* Add sort by average size in directory (Adam Plánský)
* Add toggle option for average size in directory - key 'a' (Adam Plánský)
* Add empty folder flag into ncdu browser (Adam Plánský)
* Add `!` (error) and `.` (unreadable) file flags to go with `e` (empty) (Nick Craig-Wood)
* obscure: Make `rclone obscure -` ignore newline at end of line (Nick Craig-Wood)
* operations
* Add logs when need to upload files to set mod times (Nick Craig-Wood)
* Move and copy log name of the destination object in verbose (Adam Plánský)
* Add size if known to skipped items and JSON log (Nick Craig-Wood)
* rc
* Prefer actual listener address if using ":port" or "addr:0" only (Nick Craig-Wood)
* Add listener for finished jobs (Aleksandar Jankovic)
* serve ftp: Add options to enable TLS (Deepak Sah)
* serve http/webdav: Redirect requests to the base url without the / (Nick Craig-Wood)
* serve restic: Implement object cache (Nick Craig-Wood)
* stats: Add counter for deleted directories (Nick Craig-Wood)
* sync: Only print "There was nothing to transfer" if no errors (Nick Craig-Wood)
* webui
* Prompt user for updating webui if an update is available (Chaitanya Bankanhal)
* Fix plugins initialization (negative0)
* Bug Fixes
* fs
* Fix nil pointer on copy & move operations directly to remote (Anagh Kumar Baranwal)
* Fix parsing of .. when joining remotes (Nick Craig-Wood)
* log: Fix enabling systemd logging when using `--log-file` (Nick Craig-Wood)
* check
* Make the error count match up in the log message (Nick Craig-Wood)
* move: Fix data loss when source and destination are the same object (Nick Craig-Wood)
* operations
* Fix `--cutoff-mode` hard not cutting off immediately (Nick Craig-Wood)
* Fix `--immutable` error message (Nick Craig-Wood)
* sync
* Fix `--cutoff-mode` soft & cautious so it doesn't end the transfer early (Nick Craig-Wood)
* Fix `--immutable` errors retrying many times (Nick Craig-Wood)
* Docs
* Many fixes and a rewrite of the filtering docs (edwardxml)
* Many spelling and grammar fixes (Josh Soref)
* Doc fixes for commands delete, purge, rmdir, rmdirs and mount (albertony)
* And thanks to these people for many doc fixes too numerous to list
* Ameer Dawood, Antoine GIRARD, Bob Bagwill, Christopher Stewart
* CokeMine, David, Dov Murik, Durval Menezes, Evan Harris, gtorelly
* Ilyess Bachiri, Janne Johansson, Kerry Su, Marcin Zelent,
* Martin Michlmayr, Milly, Sơn Trần-Nguyễn
* Mount
* Update systemd status with cache stats (Hekmon)
* Disable bazil/fuse based mount on macOS (Nick Craig-Wood)
* Make `rclone mount` actually run `rclone cmount` under macOS (Nick Craig-Wood)
* Implement mknod to make NFS file creation work (Nick Craig-Wood)
* Make sure we don't call umount more than once (Nick Craig-Wood)
* More user friendly mounting as network drive on windows (albertony)
* Detect if uid or gid are set in same option string: -o uid=123,gid=456 (albertony)
* Don't attempt to unmount if fs has been destroyed already (Nick Craig-Wood)
* VFS
* Fix virtual entries causing deleted files to still appear (Nick Craig-Wood)
* Fix "file already exists" error for stale cache files (Nick Craig-Wood)
* Fix file leaks with `--vfs-cache-mode` full and `--buffer-size 0` (Nick Craig-Wood)
* Fix invalid cache path on windows when using :backend: as remote (albertony)
* Local
* Continue listing files/folders when a circular symlink is detected (Manish Gupta)
* New flag `--local-zero-size-links` to fix sync on some virtual filesystems (Riccardo Iaconelli)
* Azure Blob
* Add support for service principals (James Lim)
* Add support for managed identities (Brad Ackerman)
* Add examples for access tier (Bob Pusateri)
* Utilize the streaming capabilities from the SDK for multipart uploads (Denis Neuling)
* Fix setting of mime types (Nick Craig-Wood)
* Fix crash when listing outside a SAS URL's root (Nick Craig-Wood)
* Delete archive tier blobs before update if `--azureblob-archive-tier-delete` (Nick Craig-Wood)
* Fix crash on startup (Nick Craig-Wood)
* Fix memory usage by upgrading the SDK to v0.13.0 and implementing a TransferManager (Nick Craig-Wood)
* Require go1.14+ to compile due to SDK changes (Nick Craig-Wood)
* B2
* Make NewObject use less expensive API calls (Nick Craig-Wood)
* This will improve `--files-from` and `restic serve` in particular
* Fixed crash on an empty file name (lluuaapp)
* Box
* Fix NewObject for files that differ in case (Nick Craig-Wood)
* Fix finding directories in a case insensitive way (Nick Craig-Wood)
* Chunker
* Skip long local hashing, hash in-transit (fixes) (Ivan Andreev)
* Set Features ReadMimeType to false as Object.MimeType not supported (Nick Craig-Wood)
* Fix case-insensitive NewObject, test metadata detection (Ivan Andreev)
* Drive
* Implement `rclone backend copyid` command for copying files by ID (Nick Craig-Wood)
* Added flag `--drive-stop-on-download-limit` to stop transfers when the download limit is exceeded (Anagh Kumar Baranwal)
* Implement CleanUp workaround for team drives (buengese)
* Allow shortcut resolution and creation to be retried (Nick Craig-Wood)
* Log that emptying the trash can take some time (Nick Craig-Wood)
* Add xdg office icons to xdg desktop files (Pau Rodriguez-Estivill)
* Dropbox
* Add support for viewing shared files and folders (buengese)
* Enable short lived access tokens (Nick Craig-Wood)
* Implement IDer on Objects so `rclone lsf` etc can read the IDs (buengese)
* Set Features ReadMimeType to false as Object.MimeType not supported (Nick Craig-Wood)
* Make malformed_path errors from too long files not retriable (Nick Craig-Wood)
* Test file name length before upload to fix upload loop (Nick Craig-Wood)
* Fichier
* Set Features ReadMimeType to true as Object.MimeType is supported (Nick Craig-Wood)
* FTP
* Add `--ftp-disable-msld` option to ignore MLSD for really old servers (Nick Craig-Wood)
* Make `--tpslimit` apply (Nick Craig-Wood)
* Google Cloud Storage
* Storage class object header support (Laurens Janssen)
* Fix anonymous client to use rclone's HTTP client (Nick Craig-Wood)
* Fix `Entry doesn't belong in directory "" (same as directory) - ignoring` (Nick Craig-Wood)
* Googlephotos
* New flag `--gphotos-include-archived` to show archived photos as well (Nicolas Rueff)
* Jottacloud
* Don't erroneously report support for writing mime types (buengese)
* Add support for Telia Cloud (Patrik Nordlén)
* Mailru
* Accept special folders eg camera-upload (Ivan Andreev)
* Avoid prehashing of large local files (Ivan Andreev)
* Fix uploads after recent changes on server (Ivan Andreev)
* Fix range requests after June 2020 changes on server (Ivan Andreev)
* Fix invalid timestamp on corrupted files (fixes) (Ivan Andreev)
* Remove deprecated protocol quirks (Ivan Andreev)
* Memory
* Fix setting of mime types (Nick Craig-Wood)
* Onedrive
* Add support for China region operated by 21vianet and other regional suppliers (NyaMisty)
* Warn on gateway timeout errors (Nick Craig-Wood)
* Fall back to normal copy if server-side copy unavailable (Alex Chen)
* Fix server-side copy completely disabled on OneDrive for Business (Cnly)
* (business only) workaround to replace existing file on server-side copy (Alex Chen)
* Enhance link creation with expiry, scope, type and password (Nick Craig-Wood)
* Remove % and # from the set of encoded characters (Alex Chen)
* Support addressing site by server-relative URL (kice)
* Opendrive
* Fix finding directories in a case insensitive way (Nick Craig-Wood)
* Pcloud
* Fix setting of mime types (Nick Craig-Wood)
* Premiumizeme
* Fix finding directories in a case insensitive way (Nick Craig-Wood)
* Qingstor
* Fix error propagation in CleanUp (Nick Craig-Wood)
* Fix rclone cleanup (Nick Craig-Wood)
* S3
* Added `--s3-disable-http2` to disable http/2 (Anagh Kumar Baranwal)
* Complete SSE-C implementation (Nick Craig-Wood)
* Fix hashes on small files with AWS:KMS and SSE-C (Nick Craig-Wood)
* Add MD5 metadata to objects uploaded with SSE-AWS/SSE-C (Nick Craig-Wood)
* Add `--s3-no-head` parameter to minimise transactions on upload (Nick Craig-Wood)
* Update docs with a Reducing Costs section (Nick Craig-Wood)
* Added error handling for error code 429 indicating too many requests (Anagh Kumar Baranwal)
* Add requester pays option (kelv)
* Fix copy multipart with v2 auth failing with 'SignatureDoesNotMatch' (Louis Koo)
* SFTP
* Allow cert based auth via optional pubkey (Stephen Harris)
* Allow user to optionally check server hosts key to add security (Stephen Harris)
* Defer asking for user passwords until the SSH connection succeeds (Stephen Harris)
* Remember entered password in AskPass mode (Stephen Harris)
* Implement Shutdown method (Nick Craig-Wood)
* Implement keyboard interactive authentication (Nick Craig-Wood)
* Make `--tpslimit` apply (Nick Craig-Wood)
* Implement `--sftp-use-fstat` for unusual SFTP servers (Nick Craig-Wood)
* Sugarsync
* Fix NewObject for files that differ in case (Nick Craig-Wood)
* Fix finding directories in a case insensitive way (Nick Craig-Wood)
* Swift
* Fix deletion of parts of Static Large Object (SLO) (Nguyễn Hữu Luân)
* Ensure partially uploaded large files are uploaded unless `--swift-leave-parts-on-error` (Nguyễn Hữu Luân)
* Tardigrade
* Upgrade to uplink v1.4.1 (Caleb Case)
* WebDAV
* Updated docs to show streaming to nextcloud is working (Durval Menezes)
* Yandex
* Set Features WriteMimeType to false as Yandex ignores mime types (Nick Craig-Wood)
## v1.53.4 - 2021-01-20
[See commits](https://github.com/rclone/rclone/compare/v1.53.3...v1.53.4)

View file

@ -39,15 +39,15 @@ See the [global flags page](/flags/) for global options not listed here.
* [rclone backend](/commands/rclone_backend/) - Run a backend specific command.
* [rclone cat](/commands/rclone_cat/) - Concatenates any files and sends them to stdout.
* [rclone check](/commands/rclone_check/) - Checks the files in the source and destination match.
* [rclone cleanup](/commands/rclone_cleanup/) - Clean up the remote if possible
* [rclone cleanup](/commands/rclone_cleanup/) - Clean up the remote if possible.
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
* [rclone copy](/commands/rclone_copy/) - Copy files from source to dest, skipping already copied
* [rclone copyto](/commands/rclone_copyto/) - Copy files from source to dest, skipping already copied
* [rclone copy](/commands/rclone_copy/) - Copy files from source to dest, skipping already copied.
* [rclone copyto](/commands/rclone_copyto/) - Copy files from source to dest, skipping already copied.
* [rclone copyurl](/commands/rclone_copyurl/) - Copy url content to dest.
* [rclone cryptcheck](/commands/rclone_cryptcheck/) - Cryptcheck checks the integrity of a crypted remote.
* [rclone cryptdecode](/commands/rclone_cryptdecode/) - Cryptdecode returns unencrypted file names.
* [rclone dedupe](/commands/rclone_dedupe/) - Interactively find duplicate filenames and delete/rename them.
* [rclone delete](/commands/rclone_delete/) - Remove the contents of path.
* [rclone delete](/commands/rclone_delete/) - Remove the files in path.
* [rclone deletefile](/commands/rclone_deletefile/) - Remove a single file from remote.
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell.
* [rclone gendocs](/commands/rclone_gendocs/) - Output markdown docs for rclone to the directory supplied.
@ -56,7 +56,7 @@ See the [global flags page](/flags/) for global options not listed here.
* [rclone listremotes](/commands/rclone_listremotes/) - List all the remotes in the config file.
* [rclone ls](/commands/rclone_ls/) - List the objects in the path with size and path.
* [rclone lsd](/commands/rclone_lsd/) - List all directories/containers/buckets in the path.
* [rclone lsf](/commands/rclone_lsf/) - List directories and objects in remote:path formatted for parsing
* [rclone lsf](/commands/rclone_lsf/) - List directories and objects in remote:path formatted for parsing.
* [rclone lsjson](/commands/rclone_lsjson/) - List directories and objects in the path in JSON format.
* [rclone lsl](/commands/rclone_lsl/) - List the objects in path with modification time, size and path.
* [rclone md5sum](/commands/rclone_md5sum/) - Produces an md5sum file for all the objects in the path.
@ -65,12 +65,12 @@ See the [global flags page](/flags/) for global options not listed here.
* [rclone move](/commands/rclone_move/) - Move files from source to dest.
* [rclone moveto](/commands/rclone_moveto/) - Move file or directory from source to dest.
* [rclone ncdu](/commands/rclone_ncdu/) - Explore a remote with a text based user interface.
* [rclone obscure](/commands/rclone_obscure/) - Obscure password for use in the rclone config file
* [rclone obscure](/commands/rclone_obscure/) - Obscure password for use in the rclone config file.
* [rclone purge](/commands/rclone_purge/) - Remove the path and all of its contents.
* [rclone rc](/commands/rclone_rc/) - Run a command against a running rclone.
* [rclone rcat](/commands/rclone_rcat/) - Copies standard input to file on remote.
* [rclone rcd](/commands/rclone_rcd/) - Run rclone listening to remote control commands only.
* [rclone rmdir](/commands/rclone_rmdir/) - Remove the path if empty.
* [rclone rmdir](/commands/rclone_rmdir/) - Remove the empty directory at path.
* [rclone rmdirs](/commands/rclone_rmdirs/) - Remove empty directories under the path.
* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
* [rclone settier](/commands/rclone_settier/) - Changes storage class/tier of objects in remote.

View file

@ -12,10 +12,10 @@ Get quota information from the remote.
## Synopsis
Get quota information from the remote, like bytes used/free/quota and bytes
used in the trash. Not supported by all remotes.
`rclone about` prints quota information about a remote to standard
output. The output is typically used, free, quota and trash contents.
This will print to stdout something like this:
E.g. Typical output from `rclone about remote:` is:
Total: 17G
Used: 7.444G
@ -27,16 +27,15 @@ Where the fields are:
* Total: total size available.
* Used: total size used
* Free: total amount this user could upload.
* Trashed: total amount in the trash
* Other: total amount in other storage (eg Gmail, Google Photos)
* Free: total space available to this user.
* Trashed: total space used by trash
* Other: total amount in other storage (e.g. Gmail, Google Photos)
* Objects: total number of objects in the storage
Note that not all the backends provide all the fields - they will be
missing if they are not known for that backend. Where it is known
that the value is unlimited the value will also be omitted.
Not all backends print all fields. Information is not included if it is not
provided by a backend. Where the value is unlimited it is omitted.
Use the --full flag to see the numbers written out in full, eg
Applying a `--full` flag to the command prints the bytes in full, e.g.
Total: 18253611008
Used: 7993453766
@ -44,7 +43,7 @@ Use the --full flag to see the numbers written out in full, eg
Trashed: 104857602
Other: 8849156022
Use the --json flag for a computer readable output, eg
A `--json` flag generates conveniently computer readable output, e.g.
{
"total": 18253611008,
@ -54,6 +53,10 @@ Use the --json flag for a computer readable output, eg
"free": 1411001220
}
Not all backends support the `rclone about` command.
See [List of backends that do not support about](https://rclone.org/overview/#optional-features)
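The JSON output can also be piped into other tools; a small sketch, assuming `jq` is installed, extracting the free space field from the output shown above:
```
rclone about remote: --json | jq -r '.free'
```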
```
rclone about remote: [flags]

View file

@ -27,7 +27,7 @@ for more info).
rclone backend features remote:
Pass options to the backend command with -o. This should be key=value or key, eg:
Pass options to the backend command with -o. This should be key=value or key, e.g.:
rclone backend stats remote:path stats -o format=json -o long

View file

@ -26,10 +26,10 @@ Or like this to output any .txt files in dir or its subdirectories.
rclone --include "*.txt" cat remote:path/to/dir
Use the --head flag to print characters only at the start, --tail for
the end and --offset and --count to print a section in the middle.
Use the `--head` flag to print characters only at the start, `--tail` for
the end and `--offset` and `--count` to print a section in the middle.
Note that if offset is negative it will count from the end, so
--offset -1 --count 1 is equivalent to --tail 1.
`--offset -1 --count 1` is equivalent to `--tail 1`.
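A couple of illustrative invocations (the file path is a placeholder):
```
# print the first 128 bytes of a remote file
rclone cat remote:path/to/file --head 128
# print the last 128 bytes, same as --tail 128
rclone cat remote:path/to/file --offset -128 --count 128
```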
```

View file

@ -16,10 +16,10 @@ Checks the files in the source and destination match. It compares
sizes and hashes (MD5 or SHA1) and logs a report of files which don't
match. It doesn't alter the source or destination.
If you supply the --size-only flag, it will only compare the sizes not
If you supply the `--size-only` flag, it will only compare the sizes not
the hashes as well. Use this for a quick check.
If you supply the --download flag, it will download the data from
If you supply the `--download` flag, it will download the data from
both remotes and check them against each other on the fly. This can
be useful for remotes that don't support hashes or if you really want
to check all the data.
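For example (a sketch, not from the original docs):
```
# quick size-only comparison
rclone check source:path dest:path --size-only
# thorough comparison, downloading and comparing the actual data
rclone check source:path dest:path --download
```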
@ -29,7 +29,7 @@ the source match the files in the destination, not the other way
around. This means that extra files in the destination that are not in
the source will not be detected.
The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--src-only`
The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match`
and `--error` flags write paths, one per line, to the file name (or
stdout if it is `-`) supplied. What they write is described in the
help below. For example `--differ` will write all paths which are
@ -55,6 +55,7 @@ rclone check source:path dest:path [flags]
```
--combined string Make a combined report of changes to this file
--differ string Report all non-matching files to this file
--download Check by downloading rather than with hash.
--error string Report all files with errors (hashing or reading) to this file
-h, --help help for check
--match string Report all matching files to this file

View file

@ -1,13 +1,13 @@
---
title: "rclone cleanup"
description: "Clean up the remote if possible"
description: "Clean up the remote if possible."
slug: rclone_cleanup
url: /commands/rclone_cleanup/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/cleanup/ and as part of making a release run "make commanddocs"
---
# rclone cleanup
Clean up the remote if possible
Clean up the remote if possible.
## Synopsis

View file

@ -34,7 +34,7 @@ whether the password is already obscured or not and put unobscured
passwords into the config file. If you want to be 100% certain that
the passwords get obscured then use the "--obscure" flag, or if you
are 100% certain you are already passing obscured passwords then use
"--no-obscure". You can also set osbscured passwords using the
"--no-obscure". You can also set obscured passwords using the
"rclone config password" command.
So for example if you wanted to configure a Google Drive remote but

View file

@ -9,10 +9,6 @@ url: /commands/rclone_config_delete/
Delete an existing remote `name`.
## Synopsis
Delete an existing remote `name`.
```
rclone config delete `name` [flags]
```

View file

@ -9,10 +9,6 @@ url: /commands/rclone_config_dump/
Dump the config file as JSON.
## Synopsis
Dump the config file as JSON.
```
rclone config dump [flags]
```

View file

@ -9,10 +9,6 @@ url: /commands/rclone_config_file/
Show path of configuration file in use.
## Synopsis
Show path of configuration file in use.
```
rclone config file [flags]
```

View file

@ -9,10 +9,6 @@ url: /commands/rclone_config_providers/
List in JSON format all the providers and options.
## Synopsis
List in JSON format all the providers and options.
```
rclone config providers [flags]
```

View file

@ -9,10 +9,6 @@ url: /commands/rclone_config_show/
Print (decrypted) config file, or the config for a single remote.
## Synopsis
Print (decrypted) config file, or the config for a single remote.
```
rclone config show [<remote>] [flags]
```

View file

@ -30,7 +30,7 @@ whether the password is already obscured or not and put unobscured
passwords into the config file. If you want to be 100% certain that
the passwords get obscured then use the "--obscure" flag, or if you
are 100% certain you are already passing obscured passwords then use
"--no-obscure". You can also set osbscured passwords using the
"--no-obscure". You can also set obscured passwords using the
"rclone config password" command.
If the remote uses OAuth the token will be updated, if you don't

View file

@ -1,13 +1,13 @@
---
title: "rclone copy"
description: "Copy files from source to dest, skipping already copied"
description: "Copy files from source to dest, skipping already copied."
slug: rclone_copy
url: /commands/rclone_copy/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/copy/ and as part of making a release run "make commanddocs"
---
# rclone copy
Copy files from source to dest, skipping already copied
Copy files from source to dest, skipping already copied.
## Synopsis
@ -44,7 +44,7 @@ Not to
destpath/sourcepath/two.txt
If you are familiar with `rsync`, rclone always works as if you had
written a trailing / - meaning "copy the contents of this directory".
written a trailing `/` - meaning "copy the contents of this directory".
This applies to all commands and whether you are talking about the
source or destination.

View file

@ -1,13 +1,13 @@
---
title: "rclone copyto"
description: "Copy files from source to dest, skipping already copied"
description: "Copy files from source to dest, skipping already copied."
slug: rclone_copyto
url: /commands/rclone_copyto/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/copyto/ and as part of making a release run "make commanddocs"
---
# rclone copyto
Copy files from source to dest, skipping already copied
Copy files from source to dest, skipping already copied.
## Synopsis

View file

@ -40,7 +40,7 @@ the source match the files in the destination, not the other way
around. This means that extra files in the destination that are not in
the source will not be detected.
The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--src-only`
The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match`
and `--error` flags write paths, one per line, to the file name (or
stdout if it is `-`) supplied. What they write is described in the
help below. For example `--differ` will write all paths which are

View file

@ -23,6 +23,9 @@ use it like this
rclone cryptdecode --reverse encryptedremote: filename1 filename2
Another way to accomplish this is by using the `rclone backend encode` (or `decode`) command.
See the documentation on the `crypt` overlay for more info.
```
rclone cryptdecode encryptedremote: encryptedfilename [flags]

View file

@ -15,28 +15,37 @@ Interactively find duplicate filenames and delete/rename them.
By default `dedupe` interactively finds files with duplicate
names and offers to delete all but one or rename them to be
different.
different. This is known as deduping by name.
This is only useful with backends like Google Drive which can have
duplicate file names. It can be run on wrapping backends (eg crypt) if
they wrap a backend which supports duplicate file names.
Deduping by name is only useful with backends like Google Drive which
can have duplicate file names. It can be run on wrapping backends
(e.g. crypt) if they wrap a backend which supports duplicate file
names.
In the first pass it will merge directories with the same name. It
will do this iteratively until all the identically named directories
have been merged.
However if --by-hash is passed in then dedupe will find files with
duplicate hashes instead which will work on any backend which supports
at least one hash. This can be used to find files with duplicate
content. This is known as deduping by hash.
In the second pass, for every group of duplicate file names, it will
delete all but one identical files it finds without confirmation.
This means that for most duplicated files the `dedupe`
command will not be interactive.
If deduping by name, first rclone will merge directories with the same
name. It will do this iteratively until all the identically named
directories have been merged.
Next, if deduping by name, for every group of duplicate file names /
hashes, it will delete all but one of the identical files it finds without
confirmation. This means that for most duplicated files the `dedupe` command will not be interactive.
`dedupe` considers files to be identical if they have the
same hash. If the backend does not support hashes (eg crypt wrapping
same file path and the same hash. If the backend does not support hashes (e.g. crypt wrapping
Google Drive) then they will never be found to be identical. If you
use the `--size-only` flag then files will be considered
identical if they have the same size (any hash will be ignored). This
can be useful on crypt backends which do not support hashes.
Next rclone will resolve the remaining duplicates. Exactly which
action is taken depends on the dedupe mode. By default rclone will
interactively query the user for each one.
**Important**: Since this can cause data loss, test first with the
`--dry-run` or the `--interactive`/`-i` flag.
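For example, a safe first pass deduping by content hash might look like this sketch:
```
rclone dedupe --dry-run --by-hash remote:path
```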
@ -68,7 +77,7 @@ Now the `dedupe` session
s/k/r> k
Enter the number of the file to keep> 1
one.txt: Deleted 1 extra copies
two.txt: Found 3 files with duplicates names
two.txt: Found 3 files with duplicate names
two.txt: 3 duplicates remain
1: 564374 bytes, 2016-03-05 16:22:52.118000000, MD5 7594e7dc9fc28f727c42ee3e0749de81
2: 6048320 bytes, 2016-03-05 16:22:46.185000000, MD5 1eedaa9fe86fd4b8632e2ac549403b36
@ -99,6 +108,7 @@ Dedupe can be run non interactively using the `--dedupe-mode` flag or by using a
* `--dedupe-mode largest` - removes identical files then keeps the largest one.
* `--dedupe-mode smallest` - removes identical files then keeps the smallest one.
* `--dedupe-mode rename` - removes identical files then renames the rest to be different.
* `--dedupe-mode list` - lists duplicate dirs and files only and changes nothing.
For example to rename all the identically named photos in your Google Photos directory, do
@ -116,6 +126,7 @@ rclone dedupe [mode] remote:path [flags]
## Options
```
--by-hash Find identical hashes rather than names
--dedupe-mode string Dedupe mode interactive|skip|first|newest|oldest|largest|smallest|rename. (default "interactive")
-h, --help help for dedupe
```

View file

@ -1,13 +1,13 @@
---
title: "rclone delete"
description: "Remove the contents of path."
description: "Remove the files in path."
slug: rclone_delete
url: /commands/rclone_delete/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/delete/ and as part of making a release run "make commanddocs"
---
# rclone delete
Remove the contents of path.
Remove the files in path.
## Synopsis
@ -15,20 +15,21 @@ Remove the contents of path.
Remove the files in path. Unlike `purge` it obeys include/exclude
filters so can be used to selectively delete files.
`rclone delete` only deletes objects but leaves the directory structure
`rclone delete` only deletes files but leaves the directory structure
alone. If you want to delete a directory and all of its contents use
`rclone purge`
the `purge` command.
If you supply the --rmdirs flag, it will remove all empty directories along with it.
If you supply the `--rmdirs` flag, it will remove all empty directories along with it.
You can also use the separate command `rmdir` or `rmdirs` to
delete empty directories only.
Eg delete all files bigger than 100MBytes
Check what would be deleted first (use either)
For example, to delete all files bigger than 100MBytes, you may first want to check what
would be deleted (use either):
rclone --min-size 100M lsl remote:path
rclone --dry-run --min-size 100M delete remote:path
Then delete
Then proceed with the actual delete:
rclone --min-size 100M delete remote:path

View file

@ -15,7 +15,7 @@ Output bash completion script for rclone.
Generates a bash shell autocompletion script for rclone.
This writes to /etc/bash_completion.d/rclone by default so will
probably need to be run with sudo or as root, eg
probably need to be run with sudo or as root, e.g.
sudo rclone genautocomplete bash
@ -27,7 +27,8 @@ them directly
If you supply a command line argument the script will be written
there.
If output_file is `-`, then the output will be written to stdout.
If output_file is "-", then the output will be written to stdout.
```
rclone genautocomplete bash [output_file] [flags]

View file

@ -15,7 +15,7 @@ Output fish completion script for rclone.
Generates a fish autocompletion script for rclone.
This writes to /etc/fish/completions/rclone.fish by default so will
probably need to be run with sudo or as root, eg
probably need to be run with sudo or as root, e.g.
sudo rclone genautocomplete fish
@ -27,7 +27,8 @@ them directly
If you supply a command line argument the script will be written
there.
If output_file is `-`, then the output will be written to stdout.
If output_file is "-", then the output will be written to stdout.
```
rclone genautocomplete fish [output_file] [flags]

View file

@ -15,7 +15,7 @@ Output zsh completion script for rclone.
Generates a zsh autocompletion script for rclone.
This writes to /usr/share/zsh/vendor-completions/_rclone by default so will
probably need to be run with sudo or as root, eg
probably need to be run with sudo or as root, e.g.
sudo rclone genautocomplete zsh
@ -27,7 +27,8 @@ them directly
If you supply a command line argument the script will be written
there.
If output_file is `-`, then the output will be written to stdout.
If output_file is "-", then the output will be written to stdout.
```
rclone genautocomplete zsh [output_file] [flags]

View file

@ -38,12 +38,12 @@ There are several related list commands
`lsf` is designed to be human and machine readable.
`lsjson` is designed to be machine readable.
Note that `ls` and `lsl` recurse by default - use "--max-depth 1" to stop the recursion.
Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the recursion.
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse.
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse.
Listing a non existent directory will produce an error except for
remotes which can't have empty directories (eg s3, swift, gcs, etc -
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket based remotes).

View file

@ -48,12 +48,12 @@ There are several related list commands
`lsf` is designed to be human and machine readable.
`lsjson` is designed to be machine readable.
Note that `ls` and `lsl` recurse by default - use "--max-depth 1" to stop the recursion.
Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the recursion.
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse.
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse.
Listing a non existent directory will produce an error except for
remotes which can't have empty directories (eg s3, swift, gcs, etc -
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket based remotes).

View file

@ -1,13 +1,13 @@
---
title: "rclone lsf"
description: "List directories and objects in remote:path formatted for parsing"
description: "List directories and objects in remote:path formatted for parsing."
slug: rclone_lsf
url: /commands/rclone_lsf/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/lsf/ and as part of making a release run "make commanddocs"
---
# rclone lsf
List directories and objects in remote:path formatted for parsing
List directories and objects in remote:path formatted for parsing.
## Synopsis
@ -38,7 +38,7 @@ output:
o - Original ID of underlying object
m - MimeType of object if known
e - encrypted name
T - tier of storage if known, eg "Hot" or "Cool"
T - tier of storage if known, e.g. "Hot" or "Cool"
So if you wanted the path, size and modification time, you would use
--format "pst", or maybe --format "tsp" to put the path last.
@ -121,12 +121,12 @@ There are several related list commands
`lsf` is designed to be human and machine readable.
`lsjson` is designed to be machine readable.
Note that `ls` and `lsl` recurse by default - use "--max-depth 1" to stop the recursion.
Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the recursion.
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse.
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse.
Listing a non existent directory will produce an error except for
remotes which can't have empty directories (eg s3, swift, gcs, etc -
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket based remotes).

View file

@ -41,11 +41,11 @@ may be repeated). If --hash-type is set then it implies --hash.
If --no-modtime is specified then ModTime will be blank. This can
speed things up on remotes where reading the ModTime takes an extra
request (eg s3, swift).
request (e.g. s3, swift).
If --no-mimetype is specified then MimeType will be blank. This can
speed things up on remotes where reading the MimeType takes an extra
request (eg s3, swift).
request (e.g. s3, swift).
If --encrypted is not specified the Encrypted won't be emitted.
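For example, to avoid those extra per-object requests on such remotes (an illustrative sketch):
```
rclone lsjson remote:path --no-modtime --no-mimetype
```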
@ -67,9 +67,9 @@ If the directory is a bucket in a bucket based backend, then
The time is in RFC3339 format with up to nanosecond precision. The
number of decimal digits in the seconds will depend on the precision
that the remote can hold the times, so if times are accurate to the
nearest millisecond (eg Google Drive) then 3 digits will always be
nearest millisecond (e.g. Google Drive) then 3 digits will always be
shown ("2017-05-31T16:15:57.034+01:00") whereas if the times are
accurate to the nearest second (Dropbox, Box, WebDav etc) no digits
accurate to the nearest second (Dropbox, Box, WebDav, etc.) no digits
will be shown ("2017-05-31T16:15:57+01:00").
The whole output can be processed as a JSON blob, or alternatively it
@ -89,12 +89,12 @@ There are several related list commands
`lsf` is designed to be human and machine readable.
`lsjson` is designed to be machine readable.
Note that `ls` and `lsl` recurse by default - use "--max-depth 1" to stop the recursion.
Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the recursion.
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse.
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse.
Listing a non existent directory will produce an error except for
remotes which can't have empty directories (eg s3, swift, gcs, etc -
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket based remotes).

View file

@ -38,12 +38,12 @@ There are several related list commands
`lsf` is designed to be human and machine readable.
`lsjson` is designed to be machine readable.
Note that `ls` and `lsl` recurse by default - use "--max-depth 1" to stop the recursion.
Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the recursion.
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse.
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse.
Listing a non existent directory will produce an error except for
remotes which can't have empty directories (eg s3, swift, gcs, etc -
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket based remotes).

View file

@ -9,10 +9,6 @@ url: /commands/rclone_mkdir/
Make the path if it doesn't already exist.
## Synopsis
Make the path if it doesn't already exist.
```
rclone mkdir remote:path [flags]
```

View file

@ -18,37 +18,51 @@ FUSE.
First set up your remote using `rclone config`. Check it works with `rclone ls` etc.
You can either run mount in foreground mode or background (daemon) mode. Mount runs in
foreground mode by default, use the --daemon flag to specify background mode behaviour.
Background mode is only supported on Linux and OSX, you can only run mount in
foreground mode on Windows.
On Linux and OSX, you can either run mount in foreground mode or background (daemon) mode.
Mount runs in foreground mode by default, use the `--daemon` flag to specify background mode.
You can only run mount in foreground mode on Windows.
On Linux/macOS/FreeBSD Start the mount like this where `/path/to/local/mount`
is an **empty** **existing** directory.
On Linux/macOS/FreeBSD start the mount like this, where `/path/to/local/mount`
is an **empty** **existing** directory:
rclone mount remote:path/to/files /path/to/local/mount
Or on Windows like this where `X:` is an unused drive letter
or use a path to **non-existent** directory.
On Windows you can start a mount in different ways. See [below](#mounting-modes-on-windows)
for details. The following examples will mount to an automatically assigned drive,
to specific drive letter `X:`, to path `C:\path\to\nonexistent\directory`
(which must be **non-existent** subdirectory of an **existing** parent directory or drive,
and is not supported when [mounting as a network drive](#mounting-modes-on-windows)), and
the last example will mount as network share `\\cloud\remote` and map it to an
automatically assigned drive:
rclone mount remote:path/to/files *
rclone mount remote:path/to/files X:
rclone mount remote:path/to/files C:\path\to\nonexistent\directory
When running in background mode the user will have to stop the mount manually (specified below).
rclone mount remote:path/to/files \\cloud\remote
When the program ends while in foreground mode, either via Ctrl+C or receiving
a SIGINT or SIGTERM signal, the mount is automatically stopped.
a SIGINT or SIGTERM signal, the mount should be automatically stopped.
The umount operation can fail, for example when the mountpoint is busy.
When that happens, it is the user's responsibility to stop the mount manually.
Stopping the mount manually:
When running in background mode the user will have to stop the mount manually:
# Linux
fusermount -u /path/to/local/mount
# OS X
umount /path/to/local/mount
The umount operation can fail, for example when the mountpoint is busy.
When that happens, it is the user's responsibility to stop the mount manually.
The size of the mounted file system will be set according to information retrieved
from the remote, the same as returned by the [rclone about](https://rclone.org/commands/rclone_about/)
command. Remotes with unlimited storage may report the used size only,
then an additional 1PB of free space is assumed. If the remote does not
[support](https://rclone.org/overview/#optional-features) the about feature
at all, then 1PB is set as both the total and the free size.
**Note**: As of `rclone` 1.52.2, `rclone mount` now requires Go version 1.13
or newer on some platforms depending on the underlying FUSE library in use.
## Installing on Windows
To run rclone mount on Windows, you will need to
@ -57,10 +71,110 @@ download and install [WinFsp](http://www.secfs.net/winfsp/).
[WinFsp](https://github.com/billziss-gh/winfsp) is an open source
Windows File System Proxy which makes it easy to write user space file
systems for Windows. It provides a FUSE emulation layer which rclone
uses in combination with
[cgofuse](https://github.com/billziss-gh/cgofuse). Both of these
packages are by Bill Zissimopoulos who was very helpful during the
implementation of rclone mount for Windows.
uses in combination with [cgofuse](https://github.com/billziss-gh/cgofuse).
Both of these packages are by Bill Zissimopoulos who was very helpful
during the implementation of rclone mount for Windows.
### Mounting modes on windows
Unlike other operating systems, Microsoft Windows provides a different filesystem
type for network and fixed drives. It optimises access on the assumption fixed
disk drives are fast and reliable, while network drives have relatively high latency
and less reliability. Some settings can also be differentiated between the two types,
for example that Windows Explorer should just display icons and not create preview
thumbnails for image and video files on network drives.
In most cases, rclone will mount the remote as a normal, fixed disk drive by default.
However, you can also choose to mount it as a remote network drive, often described
as a network share. If you mount an rclone remote using the default, fixed drive mode
and experience unexpected program errors, freezes or other issues, consider mounting
as a network drive instead.
When mounting as a fixed disk drive you can either mount to an unused drive letter,
or to a path - which must be **non-existent** subdirectory of an **existing** parent
directory or drive. Using the special value `*` will tell rclone to
automatically assign the next available drive letter, starting with Z: and moving backward.
Examples:
rclone mount remote:path/to/files *
rclone mount remote:path/to/files X:
rclone mount remote:path/to/files C:\path\to\nonexistent\directory
rclone mount remote:path/to/files X:
Option `--volname` can be used to set a custom volume name for the mounted
file system. The default is to use the remote name and path.
To mount as network drive, you can add option `--network-mode`
to your mount command. Mounting to a directory path is not supported in
this mode, it is a limitation Windows imposes on junctions, so the remote must always
be mounted to a drive letter.
rclone mount remote:path/to/files X: --network-mode
A volume name specified with `--volname` will be used to create the network share path.
A complete UNC path, such as `\\cloud\remote`, optionally with path
`\\cloud\remote\madeup\path`, will be used as is. Any other
string will be used as the share part, after a default prefix `\\server\`.
If no volume name is specified then `\\server\share` will be used.
You must make sure the volume name is unique when you are mounting more than one drive,
or else the mount command will fail. The share name will be treated as the volume label for
the mapped drive, shown in Windows Explorer etc, while the complete
`\\server\share` will be reported as the remote UNC path by
`net use` etc, just like a normal network drive mapping.
If you specify a full network share UNC path with `--volname`, this will implicitly
set the `--network-mode` option, so the following two examples have same result:
rclone mount remote:path/to/files X: --network-mode
rclone mount remote:path/to/files X: --volname \\server\share
You may also specify the network share UNC path as the mountpoint itself. Then rclone
will automatically assign a drive letter, same as with `*` and use that as
mountpoint, and instead use the UNC path specified as the volume name, as if it were
specified with the `--volname` option. This will also implicitely set
the `--network-mode` option. This means the following two examples have same result:
rclone mount remote:path/to/files \\cloud\remote
rclone mount remote:path/to/files * --volname \\cloud\remote
There is yet another way to enable network mode, and to set the share path,
and that is to pass the "native" libfuse/WinFsp option directly:
`--fuse-flag --VolumePrefix=\server\share`. Note that the path
must be with just a single backslash prefix in this case.
*Note:* In previous versions of rclone this was the only supported method.
[Read more about drive mapping](https://en.wikipedia.org/wiki/Drive_mapping)
See also [Limitations](#limitations) section below.
### Windows filesystem permissions
The FUSE emulation layer on Windows must convert between the POSIX-based
permission model used in FUSE, and the permission model used in Windows,
based on access-control lists (ACL).
The mounted filesystem will normally get three entries in its access-control list (ACL),
representing permissions for the POSIX permission scopes: Owner, group and others.
By default, the owner and group will be taken from the current user, and the built-in
group "Everyone" will be used to represent others. The user/group can be customized
with FUSE options "UserName" and "GroupName",
e.g. `-o UserName=user123 -o GroupName="Authenticated Users"`.
The permissions on each entry will be set according to
[options](#options) `--dir-perms` and `--file-perms`,
which takes a value in traditional [numeric notation](https://en.wikipedia.org/wiki/File-system_permissions#Numeric_notation),
where the default corresponds to `--file-perms 0666 --dir-perms 0777`.
Note that the mapping of permissions is not always trivial, and the result
you see in Windows Explorer may not be exactly like you expected.
For example, when setting a value that includes write access, this will be
mapped to individual permissions "write attributes", "write data" and "append data",
but not "write extended attributes" (WinFsp does not support extended attributes,
see [this](https://github.com/billziss-gh/winfsp/wiki/NTFS-Compatibility)).
Windows will then show this as basic permission "Special" instead of "Write",
because "Write" includes the "write extended attributes" permission.
### Windows caveats
@ -78,43 +192,15 @@ infrastructure](https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Archit
which creates drives accessible for everyone on the system or
alternatively using [the nssm service manager](https://nssm.cc/usage).
### Mount as a network drive
By default, rclone will mount the remote as a normal drive. However,
you can also mount it as a **Network Drive** (or **Network Share**, as
mentioned in some places)
Unlike other systems, Windows provides a different filesystem type for
network drives. Windows and other programs treat the network drives
and fixed/removable drives differently: In network drives, many I/O
operations are optimized, as the high latency and low reliability
(compared to a normal drive) of a network is expected.
Although many people prefer network shares to be mounted as normal
system drives, this might cause some issues, such as programs not
working as expected or freezes and errors while operating with the
mounted remote in Windows Explorer. If you experience any of those,
consider mounting rclone remotes as network shares, as Windows expects
normal drives to be fast and reliable, while cloud storage is far from
that. See also [Limitations](#limitations) section below for more
info
Add "--fuse-flag --VolumePrefix=\server\share" to your "mount"
command, **replacing "share" with any other name of your choice if you
are mounting more than one remote**. Otherwise, the mountpoints will
conflict and your mounted filesystems will overlap.
[Read more about drive mapping](https://en.wikipedia.org/wiki/Drive_mapping)
## Limitations
Without the use of "--vfs-cache-mode" this can only write files
Without the use of `--vfs-cache-mode` this can only write files
sequentially, it can only seek when reading. This means that many
applications won't work with their files on an rclone mount without
"--vfs-cache-mode writes" or "--vfs-cache-mode full". See the [File
Caching](#vfs-file-caching) section for more info.
`--vfs-cache-mode writes` or `--vfs-cache-mode full`.
See the [File Caching](#file-caching) section for more info.
The bucket based remotes (eg Swift, S3, Google Compute Storage, B2,
The bucket based remotes (e.g. Swift, S3, Google Compute Storage, B2,
Hubic) do not support the concept of empty directories, so empty
directories will have a tendency to disappear once they fall out of
the directory cache.
@ -127,15 +213,15 @@ File systems expect things to be 100% reliable, whereas cloud storage
systems are a long way from 100% reliable. The rclone sync/copy
commands cope with this with lots of retries. However rclone mount
can't use retries in the same way without making local copies of the
uploads. Look at the [file caching](#vfs-file-caching)
uploads. Look at the [file caching](#file-caching)
for solutions to make mount more reliable.
## Attribute caching
You can use the flag --attr-timeout to set the time the kernel caches
the attributes (size, modification time etc) for directory entries.
You can use the flag `--attr-timeout` to set the time the kernel caches
the attributes (size, modification time, etc.) for directory entries.
The default is "1s" which caches files just long enough to avoid
The default is `1s` which caches files just long enough to avoid
too many callbacks to rclone from the kernel.
In theory 0s should be the correct value for filesystems which can
@ -146,14 +232,14 @@ few problems such as
and [excessive time listing directories](https://github.com/rclone/rclone/issues/2095#issuecomment-371141147).
The kernel can cache the info about a file for the time given by
"--attr-timeout". You may see corruption if the remote file changes
`--attr-timeout`. You may see corruption if the remote file changes
length during this window. It will show up as either a truncated file
or a file with garbage on the end. With "--attr-timeout 1s" this is
very unlikely but not impossible. The higher you set "--attr-timeout"
or a file with garbage on the end. With `--attr-timeout 1s` this is
very unlikely but not impossible. The higher you set `--attr-timeout`
the more likely it is. The default setting of "1s" is the lowest
setting which mitigates the problems above.
If you set it higher ('10s' or '1m' say) then the kernel will call
If you set it higher (`10s` or `1m` say) then the kernel will call
back to rclone less often making it more efficient, however there is
more chance of the corruption issue above.
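For example, to trade a longer attribute cache for fewer kernel callbacks (the value here is only illustrative):

```
rclone mount remote:path /path/to/mountpoint --attr-timeout 10s
```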
@ -164,7 +250,7 @@ This is the same as setting the attr_timeout option in mount.fuse.
## Filters
Rclone's filters can be used to select a subset of the
Note that all the rclone filters can be used to select a subset of the
files to be visible in the mount.
## systemd
@ -175,28 +261,25 @@ after the mountpoint has been successfully set up.
Units having the rclone mount service specified as a requirement
will see all files and folders immediately in this mode.
## chunked reading ###
## chunked reading
--vfs-read-chunk-size will enable reading the source objects in parts.
`--vfs-read-chunk-size` will enable reading the source objects in parts.
This can reduce the used download quota for some remotes by requesting only chunks
from the remote that are actually read at the cost of an increased number of requests.
When --vfs-read-chunk-size-limit is also specified and greater than --vfs-read-chunk-size,
the chunk size for each open file will get doubled for each chunk read, until the
specified value is reached. A value of -1 will disable the limit and the chunk size will
grow indefinitely.
When `--vfs-read-chunk-size-limit` is also specified and greater than
`--vfs-read-chunk-size`, the chunk size for each open file will get doubled
for each chunk read, until the specified value is reached. A value of `-1` will disable
the limit and the chunk size will grow indefinitely.
With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the following
parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on.
When --vfs-read-chunk-size-limit 500M is specified, the result would be
With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0`
the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on.
When `--vfs-read-chunk-size-limit 500M` is specified, the result would be
0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
Chunked reading will only work with --vfs-cache-mode < full, as the file will always
be copied to the vfs cache before opening with --vfs-cache-mode full.
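For example, the 0-100M, 100M-300M, 300M-700M, ... sequence described above corresponds to a mount started like this:

```
rclone mount remote:path /path/to/mountpoint --vfs-read-chunk-size 100M --vfs-read-chunk-size-limit 500M
```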
## VFS - Virtual File System
Mount uses rclone's VFS layer. This adapts the cloud storage objects
This command uses the VFS layer. This adapts the cloud storage objects
that rclone uses into something which looks much more like a disk
filing system.
@ -290,9 +373,9 @@ second. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using --vfs-cache-max-size note that the cache may exceed this size
If using `--vfs-cache-max-size` note that the cache may exceed this size
for two reasons. Firstly because it is only checked every
--vfs-cache-poll-interval. Secondly because open files cannot be
`--vfs-cache-poll-interval`. Secondly because open files cannot be
evicted from the cache.
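For example, a write cache capped at roughly 10G and checked every minute could be requested like this (the values are only illustrative):

```
rclone mount remote:path /path/to/mountpoint --vfs-cache-mode writes --vfs-cache-max-size 10G --vfs-cache-poll-interval 1m
```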
### --vfs-cache-mode off
@ -340,7 +423,7 @@ In this mode all reads and writes are buffered to and from disk. When
data is read from the remote this is buffered to disk as well.
In this mode the files in the cache will be sparse files and rclone
will keep track of which bits of the files it has dowloaded.
will keep track of which bits of the files it has downloaded.
So if an application only reads the starts of each file, then rclone
will only buffer the start of the file. These files will appear to be
@ -357,6 +440,11 @@ whereas the --vfs-read-ahead is buffered on disk.
When using this mode it is recommended that --buffer-size is not set
too big and --vfs-read-ahead is set large if required.
**IMPORTANT** not all file systems support sparse files. In particular
FAT/exFAT do not. Rclone will perform very badly if the cache
directory is on a filesystem which doesn't support sparse files and it
will log an ERROR message if one is detected.
## VFS Performance
These flags may be used to enable/disable features of the VFS for
@ -392,6 +480,12 @@ on disk cache file.
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
When using VFS write caching (--vfs-cache-mode with value writes or full),
the global flag --transfers can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag --checkers has no effect on mount).
--transfers int Number of file transfers to run in parallel. (default 4)
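For example, to upload up to 8 modified files from the cache in parallel (the value is only illustrative):

```
rclone mount remote:path /path/to/mountpoint --vfs-cache-mode full --transfers 8
```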
## VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only
@ -405,7 +499,7 @@ It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS
file systems case-sensitive but that is not the default
The "--vfs-case-insensitive" mount flag controls how rclone handles these
The `--vfs-case-insensitive` mount flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the mounted
file system as-is. If the flag is "true" (or appears without a value on
command line), rclone may perform a "fixup" as explained below.
@ -435,30 +529,33 @@ rclone mount remote:path /path/to/mountpoint [flags]
## Options
```
--allow-non-empty Allow mounting over a non-empty directory (not Windows).
--allow-other Allow access to other users.
--allow-root Allow access to root user.
--async-read Use asynchronous reads. (default true)
--allow-non-empty Allow mounting over a non-empty directory. Not supported on Windows.
--allow-other Allow access to other users. Not supported on Windows.
--allow-root Allow access to root user. Not supported on Windows.
--async-read Use asynchronous reads. Not supported on Windows. (default true)
--attr-timeout duration Time for which file/directory attributes are cached. (default 1s)
--daemon Run mount as a daemon (background mode).
--daemon-timeout duration Time limit for rclone to respond to kernel (not supported by all OSes).
--daemon Run mount as a daemon (background mode). Not supported on Windows.
--daemon-timeout duration Time limit for rclone to respond to kernel. Not supported on Windows.
--debug-fuse Debug the FUSE internals - needs -v.
--default-permissions Makes kernel enforce access control based on the file mode.
--default-permissions Makes kernel enforce access control based on the file mode. Not supported on Windows.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
--gid uint32 Override the gid field set by the filesystem. (default 1000)
--gid uint32 Override the gid field set by the filesystem. Not supported on Windows. (default 1000)
-h, --help help for mount
--max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads. (default 128k)
--max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads. Not supported on Windows. (default 128k)
--network-mode Mount as remote network drive, instead of fixed disk drive. Supported on Windows only
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--noappledouble Ignore Apple Double (._) and .DS_Store files. Supported on OSX only. (default true)
--noapplexattr Ignore all "com.apple.*" extended attributes. Supported on OSX only.
-o, --option stringArray Option for libfuse/WinFsp. Repeat if required.
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
--uid uint32 Override the uid field set by the filesystem. (default 1000)
--umask int Override the permission bits set by the filesystem.
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
--umask int Override the permission bits set by the filesystem. Not supported on Windows.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
@ -470,8 +567,8 @@ rclone mount remote:path /path/to/mountpoint [flags]
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
--volname string Set the volume name (not supported by all OSes).
--write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.
--volname string Set the volume name. Supported on Windows and OSX only.
--write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used. Not supported on Windows.
```
See the [global flags page](/flags/) for global options not listed here.

View file

@ -14,15 +14,15 @@ Move files from source to dest.
Moves the contents of the source directory to the destination
directory. Rclone will error if the source and destination overlap and
the remote does not support a server side directory move operation.
the remote does not support a server-side directory move operation.
If no filters are in use and if possible this will server side move
If no filters are in use and if possible this will server-side move
`source:path` into `dest:path`. After this `source:path` will no
longer exist.
Otherwise for each file in `source:path` selected by the filters (if
any) this will move it into `dest:path`. If possible a server side
move will be used, otherwise it will copy it (server side if possible)
any) this will move it into `dest:path`. If possible a server-side
move will be used, otherwise it will copy it (server-side if possible)
into `dest:path` then delete the original (if no errors on copy) in
`source:path`.
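For example, previewing and then performing a move with the placeholder paths used above:

```
rclone move source:path dest:path --dry-run
rclone move source:path dest:path
```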

View file

@ -30,9 +30,10 @@ Here are the keys - press '?' to toggle the help on and off
←,h to return
c toggle counts
g toggle graph
n,s,C sort by name,size,count
a toggle average size in directory
n,s,C,A sort by name,size,count,average size
d delete file/directory
y copy current path to clipbard
y copy current path to clipboard
Y display current path
^L refresh screen
? to toggle help on and off

View file

@ -1,13 +1,13 @@
---
title: "rclone obscure"
description: "Obscure password for use in the rclone config file"
description: "Obscure password for use in the rclone config file."
slug: rclone_obscure
url: /commands/rclone_obscure/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/obscure/ and as part of making a release run "make commanddocs"
---
# rclone obscure
Obscure password for use in the rclone config file
Obscure password for use in the rclone config file.
## Synopsis
@ -23,7 +23,8 @@ the config file. However it is very hard to shoulder surf a 64
character hex token.
This command can also accept a password through STDIN instead of an
argument by passing a hyphen as an argument. Example:
argument by passing a hyphen as an argument. This will use the first
line of STDIN as the password, not including the trailing newline.
echo "secretpassword" | rclone obscure -

View file

@ -13,8 +13,9 @@ Remove the path and all of its contents.
Remove the path and all of its contents. Note that this does not obey
include/exclude filters - everything will be removed. Use `delete` if
you want to selectively delete files.
include/exclude filters - everything will be removed. Use the `delete`
command if you want to selectively delete files. To delete empty directories only,
use the `rmdir` or `rmdirs` command.
**Important**: Since this can cause data loss, test first with the
`--dry-run` or the `--interactive`/`-i` flag.
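For example, previewing and then performing the removal:

```
rclone purge remote:path --dry-run
rclone purge remote:path
```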

View file

@ -56,7 +56,7 @@ Will place this in the "arg" value
Use --loopback to connect to the rclone instance running "rclone rc".
This is very useful for testing commands without having to run an
rclone rc server, eg:
rclone rc server, e.g.:
rclone rc --loopback operations/about fs=/
@ -73,7 +73,7 @@ rclone rc commands parameter [flags]
-h, --help help for rc
--json string Input JSON - use instead of key=value args.
--loopback If set connect to this rclone instance not via HTTP.
--no-output If set don't output the JSON result.
--no-output If set, don't output the JSON result.
-o, --opt stringArray Option in the form name=value or name placed in the "opt" array.
--pass string Password to use to connect to rclone remote control.
--url string URL to connect to rclone remote control. (default "http://localhost:5572/")

View file

@ -1,19 +1,24 @@
---
title: "rclone rmdir"
description: "Remove the path if empty."
description: "Remove the empty directory at path."
slug: rclone_rmdir
url: /commands/rclone_rmdir/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/rmdir/ and as part of making a release run "make commanddocs"
---
# rclone rmdir
Remove the path if empty.
Remove the empty directory at path.
## Synopsis
Remove the path. Note that you can't remove a path with
objects in it, use purge for that.
This removes the empty directory given by path. It will not remove the path if it
has any objects in it, not even empty subdirectories. Use the
`rmdirs` command (or `delete` with the `--rmdirs` option)
to do that.
To delete a path and any objects in it, use the `purge` command.
```
rclone rmdir remote:path [flags]

View file

@ -11,15 +11,21 @@ Remove empty directories under the path.
## Synopsis
This removes any empty directories (or directories that only contain
empty directories) under the path that it finds, including the path if
it has nothing in.
If you supply the --leave-root flag, it will not remove the root directory.
This recursively removes any empty directories (including directories
that only contain empty directories) that it finds under the path.
The root path itself will also be removed if it is empty, unless
you supply the `--leave-root` flag.
Use the `rmdir` command to delete just the empty directory
given by path, without recursing.
This is useful for tidying up remotes that rclone has left a lot of
empty directories in.
empty directories in. For example, the `delete` command will
delete files but leave the directory structure (unless used with
the `--rmdirs` option).
To delete a path and any objects in it, use the `purge` command.
```

View file

@ -12,7 +12,7 @@ Serve a remote over a protocol.
## Synopsis
rclone serve is used to serve a remote over a given protocol. This
command requires the use of a subcommand to specify the protocol, eg
command requires the use of a subcommand to specify the protocol, e.g.
rclone serve http remote:

View file

@ -24,7 +24,7 @@ players might show files that they are not able to play back correctly.
## Server options
Use `--addr` to specify which IP address and port the server should
listen on, eg `--addr 1.2.3.4:8000` or `--addr :8080` to listen to all
listen on, e.g. `--addr 1.2.3.4:8000` or `--addr :8080` to listen to all
IPs.
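For example, to serve a remote on port 8080 on all interfaces (the port is chosen only for illustration):

```
rclone serve dlna remote:path --addr :8080
```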
Use `--name` to choose the friendly server name, which is by
@ -129,9 +129,9 @@ second. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using --vfs-cache-max-size note that the cache may exceed this size
If using `--vfs-cache-max-size` note that the cache may exceed this size
for two reasons. Firstly because it is only checked every
--vfs-cache-poll-interval. Secondly because open files cannot be
`--vfs-cache-poll-interval`. Secondly because open files cannot be
evicted from the cache.
### --vfs-cache-mode off
@ -179,7 +179,7 @@ In this mode all reads and writes are buffered to and from disk. When
data is read from the remote this is buffered to disk as well.
In this mode the files in the cache will be sparse files and rclone
will keep track of which bits of the files it has dowloaded.
will keep track of which bits of the files it has downloaded.
So if an application only reads the starts of each file, then rclone
will only buffer the start of the file. These files will appear to be
@ -196,6 +196,11 @@ whereas the --vfs-read-ahead is buffered on disk.
When using this mode it is recommended that --buffer-size is not set
too big and --vfs-read-ahead is set large if required.
**IMPORTANT** not all file systems support sparse files. In particular
FAT/exFAT do not. Rclone will perform very badly if the cache
directory is on a filesystem which doesn't support sparse files and it
will log an ERROR message if one is detected.
## VFS Performance
These flags may be used to enable/disable features of the VFS for
@ -231,6 +236,12 @@ on disk cache file.
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
When using VFS write caching (--vfs-cache-mode with value writes or full),
the global flag --transfers can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag --checkers has no effect on mount).
--transfers int Number of file transfers to run in parallel. (default 4)
## VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only
@ -244,7 +255,7 @@ It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS
file systems case-sensitive but that is not the default
The "--vfs-case-insensitive" mount flag controls how rclone handles these
The `--vfs-case-insensitive` mount flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the mounted
file system as-is. If the flag is "true" (or appears without a value on
command line), rclone may perform a "fixup" as explained below.
@ -278,7 +289,7 @@ rclone serve dlna remote:path [flags]
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
--gid uint32 Override the gid field set by the filesystem. (default 1000)
--gid uint32 Override the gid field set by the filesystem. Not supported on Windows. (default 1000)
-h, --help help for dlna
--log-trace enable trace logging of SOAP traffic
--name string name of DLNA server
@ -287,8 +298,8 @@ rclone serve dlna remote:path [flags]
--no-seek Don't allow seeking in files.
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
--uid uint32 Override the uid field set by the filesystem. (default 1000)
--umask int Override the permission bits set by the filesystem. (default 2)
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)

View file

@ -19,7 +19,7 @@ or you can make a remote of type ftp to read and write it.
## Server options
Use --addr to specify which IP address and port the server should
listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all
listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all
IPs. By default it only listens on localhost. You can use port
:0 to let the OS choose an available port.
@ -128,9 +128,9 @@ second. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using --vfs-cache-max-size note that the cache may exceed this size
If using `--vfs-cache-max-size` note that the cache may exceed this size
for two reasons. Firstly because it is only checked every
--vfs-cache-poll-interval. Secondly because open files cannot be
`--vfs-cache-poll-interval`. Secondly because open files cannot be
evicted from the cache.
### --vfs-cache-mode off
@ -178,7 +178,7 @@ In this mode all reads and writes are buffered to and from disk. When
data is read from the remote this is buffered to disk as well.
In this mode the files in the cache will be sparse files and rclone
will keep track of which bits of the files it has dowloaded.
will keep track of which bits of the files it has downloaded.
So if an application only reads the starts of each file, then rclone
will only buffer the start of the file. These files will appear to be
@ -195,6 +195,11 @@ whereas the --vfs-read-ahead is buffered on disk.
When using this mode it is recommended that --buffer-size is not set
too big and --vfs-read-ahead is set large if required.
**IMPORTANT** not all file systems support sparse files. In particular
FAT/exFAT do not. Rclone will perform very badly if the cache
directory is on a filesystem which doesn't support sparse files and it
will log an ERROR message if one is detected.
## VFS Performance
These flags may be used to enable/disable features of the VFS for
@ -230,6 +235,12 @@ on disk cache file.
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
When using VFS write caching (--vfs-cache-mode with value writes or full),
the global flag --transfers can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag --checkers has no effect on mount).
--transfers int Number of file transfers to run in parallel. (default 4)
## VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only
@ -243,7 +254,7 @@ It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS
file systems case-sensitive but that is not the default
The "--vfs-case-insensitive" mount flag controls how rclone handles these
The `--vfs-case-insensitive` mount flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the mounted
file system as-is. If the flag is "true" (or appears without a value on
command line), rclone may perform a "fixup" as explained below.
@ -270,14 +281,14 @@ otherwise. If the flag is provided without a value, then it is "true".
If you supply the parameter `--auth-proxy /path/to/program` then
rclone will use that program to generate backends on the fly which
then are used to authenticate incoming requests. This uses a simple
JSON based protocl with input on STDIN and output on STDOUT.
JSON based protocol with input on STDIN and output on STDOUT.
**PLEASE NOTE:** `--auth-proxy` and `--authorized-keys` cannot be used
together, if `--auth-proxy` is set the authorized keys option will be
ignored.
There is an example program
[bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/bin/test_proxy.py)
[bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/test_proxy.py)
in the rclone source code.
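As a rough sketch of the exchange (the proxy script name, backend type and host below are placeholders; the field names follow the example program), a password login might look like this:

```
$ echo '{"user": "alice", "pass": "secret"}' | ./my_auth_proxy
{
  "type": "sftp",
  "_root": "",
  "_obscure": "pass",
  "user": "alice",
  "pass": "secret",
  "host": "sftp.example.com"
}
```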
The program's job is to take a `user` and `pass` on the input and turn
@ -356,11 +367,13 @@ rclone serve ftp remote:path [flags]
```
--addr string IPaddress:Port or :Port to bind server to. (default "localhost:2121")
--auth-proxy string A program to use to create the backend from the auth.
--cert string TLS PEM key (concatenation of certificate and CA certificate)
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
--gid uint32 Override the gid field set by the filesystem. (default 1000)
--gid uint32 Override the gid field set by the filesystem. Not supported on Windows. (default 1000)
-h, --help help for ftp
--key string TLS PEM Private key
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
@ -369,8 +382,8 @@ rclone serve ftp remote:path [flags]
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--public-ip string Public IP address to advertise for passive connections.
--read-only Mount read-only.
--uid uint32 Override the uid field set by the filesystem. (default 1000)
--umask int Override the permission bits set by the filesystem. (default 2)
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--user string User name for authentication. (default "anonymous")
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)

View file

@ -15,7 +15,7 @@ rclone serve http implements a basic web server to serve the remote
over HTTP. This can be viewed in a web browser or you can make a
remote of type http read from it.
You can use the filter flags (eg --include, --exclude) to control what
You can use the filter flags (e.g. --include, --exclude) to control what
is served.
The server will log errors. Use -v to see access logs.
@ -26,7 +26,7 @@ control the stats printing.
## Server options
Use --addr to specify which IP address and port the server should
listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all
listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all
IPs. By default it only listens on localhost. You can use port
:0 to let the OS choose an available port.
@ -57,7 +57,7 @@ to be used within the template to server pages:
| .Name | The full path of a file/directory. |
| .Title | Directory listing of .Name |
| .Sort | The current sort used. This is changeable via ?sort= parameter |
| | Sort Options: namedirfist,name,size,time (default namedirfirst) |
| | Sort Options: namedirfirst,name,size,time (default namedirfirst) |
| .Order | The current ordering used. This is changeable via ?order= parameter |
| | Order Options: asc,desc (default asc) |
| .Query | Currently unused. |
@ -200,9 +200,9 @@ second. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using --vfs-cache-max-size note that the cache may exceed this size
If using `--vfs-cache-max-size` note that the cache may exceed this size
for two reasons. Firstly because it is only checked every
--vfs-cache-poll-interval. Secondly because open files cannot be
`--vfs-cache-poll-interval`. Secondly because open files cannot be
evicted from the cache.
### --vfs-cache-mode off
@ -250,7 +250,7 @@ In this mode all reads and writes are buffered to and from disk. When
data is read from the remote this is buffered to disk as well.
In this mode the files in the cache will be sparse files and rclone
will keep track of which bits of the files it has dowloaded.
will keep track of which bits of the files it has downloaded.
So if an application only reads the starts of each file, then rclone
will only buffer the start of the file. These files will appear to be
@ -267,6 +267,11 @@ whereas the --vfs-read-ahead is buffered on disk.
When using this mode it is recommended that --buffer-size is not set
too big and --vfs-read-ahead is set large if required.
**IMPORTANT** not all file systems support sparse files. In particular
FAT/exFAT do not. Rclone will perform very badly if the cache
directory is on a filesystem which doesn't support sparse files and it
will log an ERROR message if one is detected.
## VFS Performance
These flags may be used to enable/disable features of the VFS for
@ -302,6 +307,12 @@ on disk cache file.
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
When using VFS write caching (--vfs-cache-mode with value writes or full),
the global flag --transfers can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag --checkers has no effect on mount).
--transfers int Number of file transfers to run in parallel. (default 4)
## VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only
@ -315,7 +326,7 @@ It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS
file systems case-sensitive but that is not the default
The "--vfs-case-insensitive" mount flag controls how rclone handles these
The `--vfs-case-insensitive` mount flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the mounted
file system as-is. If the flag is "true" (or appears without a value on
command line), rclone may perform a "fixup" as explained below.
@ -352,7 +363,7 @@ rclone serve http remote:path [flags]
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
--gid uint32 Override the gid field set by the filesystem. (default 1000)
--gid uint32 Override the gid field set by the filesystem. Not supported on Windows. (default 1000)
-h, --help help for http
--htpasswd string htpasswd file - if not provided no authentication is done
--key string SSL PEM Private key
@ -367,8 +378,8 @@ rclone serve http remote:path [flags]
--server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--template string User Specified Template.
--uid uint32 Override the uid field set by the filesystem. (default 1000)
--umask int Override the permission bits set by the filesystem. (default 2)
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--user string User name for authentication.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)

View file

@ -44,6 +44,10 @@ with use of the "--addr" flag.
You might wish to start this server on boot.
Adding --cache-objects=false will cause rclone to stop caching objects
returned from the List call. Caching is normally desirable as it speeds
up downloading objects, saves transactions and uses very little memory.
## Setting up restic to use rclone ###
Now you can [follow the restic
@ -92,7 +96,7 @@ with a path of `/<username>/`.
## Server options
Use --addr to specify which IP address and port the server should
listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all
listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all
IPs. By default it only listens on localhost. You can use port
:0 to let the OS choose an available port.
@ -123,7 +127,7 @@ to be used within the template to server pages:
| .Name | The full path of a file/directory. |
| .Title | Directory listing of .Name |
| .Sort | The current sort used. This is changeable via ?sort= parameter |
| | Sort Options: namedirfist,name,size,time (default namedirfirst) |
| | Sort Options: namedirfirst,name,size,time (default namedirfirst) |
| .Order | The current ordering used. This is changeable via ?order= parameter |
| | Order Options: asc,desc (default asc) |
| .Query | Currently unused. |
@ -181,6 +185,7 @@ rclone serve restic remote:path [flags]
--addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
--append-only disallow deletion of repository data
--baseurl string Prefix for URLs - leave blank for root.
--cache-objects cache listed objects (default true)
--cert string SSL PEM key (concatenation of certificate and CA certificate)
--client-ca string Client certificate authority to verify clients with
-h, --help help for restic

View file

@ -15,7 +15,7 @@ rclone serve sftp implements an SFTP server to serve the remote
over SFTP. This can be used with an SFTP client or you can make a
remote of type sftp to use with it.
You can use the filter flags (eg --include, --exclude) to control what
You can use the filter flags (e.g. --include, --exclude) to control what
is served.
The server will log errors. Use -v to see access logs.
@ -139,9 +139,9 @@ second. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using --vfs-cache-max-size note that the cache may exceed this size
If using `--vfs-cache-max-size` note that the cache may exceed this size
for two reasons. Firstly because it is only checked every
--vfs-cache-poll-interval. Secondly because open files cannot be
`--vfs-cache-poll-interval`. Secondly because open files cannot be
evicted from the cache.
### --vfs-cache-mode off
@ -189,7 +189,7 @@ In this mode all reads and writes are buffered to and from disk. When
data is read from the remote this is buffered to disk as well.
In this mode the files in the cache will be sparse files and rclone
will keep track of which bits of the files it has dowloaded.
will keep track of which bits of the files it has downloaded.
So if an application only reads the starts of each file, then rclone
will only buffer the start of the file. These files will appear to be
@ -206,6 +206,11 @@ whereas the --vfs-read-ahead is buffered on disk.
When using this mode it is recommended that --buffer-size is not set
too big and --vfs-read-ahead is set large if required.
**IMPORTANT** not all file systems support sparse files. In particular
FAT/exFAT do not. Rclone will perform very badly if the cache
directory is on a filesystem which doesn't support sparse files and it
will log an ERROR message if one is detected.
## VFS Performance
These flags may be used to enable/disable features of the VFS for
@ -241,6 +246,12 @@ on disk cache file.
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
When using VFS write caching (--vfs-cache-mode with value writes or full),
the global flag --transfers can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag --checkers has no effect on mount).
--transfers int Number of file transfers to run in parallel. (default 4)
## VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only
@ -254,7 +265,7 @@ It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS
file systems case-sensitive but that is not the default
The "--vfs-case-insensitive" mount flag controls how rclone handles these
The `--vfs-case-insensitive` mount flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the mounted
file system as-is. If the flag is "true" (or appears without a value on
command line), rclone may perform a "fixup" as explained below.
@ -281,14 +292,14 @@ otherwise. If the flag is provided without a value, then it is "true".
If you supply the parameter `--auth-proxy /path/to/program` then
rclone will use that program to generate backends on the fly which
then are used to authenticate incoming requests. This uses a simple
JSON based protocl with input on STDIN and output on STDOUT.
JSON based protocol with input on STDIN and output on STDOUT.
**PLEASE NOTE:** `--auth-proxy` and `--authorized-keys` cannot be used
together, if `--auth-proxy` is set the authorized keys option will be
ignored.
There is an example program
[bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/bin/test_proxy.py)
[bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/test_proxy.py)
in the rclone source code.
The program's job is to take a `user` and `pass` on the input and turn
@ -371,7 +382,7 @@ rclone serve sftp remote:path [flags]
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
--gid uint32 Override the gid field set by the filesystem. (default 1000)
--gid uint32 Override the gid field set by the filesystem. Not supported on Windows. (default 1000)
-h, --help help for sftp
--key stringArray SSH private host key file (Can be multi-valued, leave blank to auto generate)
--no-auth Allow connections with no authentication if set.
@ -381,8 +392,8 @@ rclone serve sftp remote:path [flags]
--pass string Password for authentication.
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
--uid uint32 Override the uid field set by the filesystem. (default 1000)
--umask int Override the permission bits set by the filesystem. (default 2)
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--user string User name for authentication.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)

View file

@ -34,7 +34,7 @@ Use "rclone hashsum" to see the full list.
## Server options
Use --addr to specify which IP address and port the server should
listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all
listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all
IPs. By default it only listens on localhost. You can use port
:0 to let the OS choose an available port.
@ -65,7 +65,7 @@ to be used within the template to server pages:
| .Name | The full path of a file/directory. |
| .Title | Directory listing of .Name |
| .Sort | The current sort used. This is changeable via ?sort= parameter |
| | Sort Options: namedirfist,name,size,time (default namedirfirst) |
| | Sort Options: namedirfirst,name,size,time (default namedirfirst) |
| .Order | The current ordering used. This is changeable via ?order= parameter |
| | Order Options: asc,desc (default asc) |
| .Query | Currently unused. |
@ -208,9 +208,9 @@ second. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using --vfs-cache-max-size note that the cache may exceed this size
If using `--vfs-cache-max-size` note that the cache may exceed this size
for two reasons. Firstly because it is only checked every
--vfs-cache-poll-interval. Secondly because open files cannot be
`--vfs-cache-poll-interval`. Secondly because open files cannot be
evicted from the cache.
### --vfs-cache-mode off
@ -258,7 +258,7 @@ In this mode all reads and writes are buffered to and from disk. When
data is read from the remote this is buffered to disk as well.
In this mode the files in the cache will be sparse files and rclone
will keep track of which bits of the files it has dowloaded.
will keep track of which bits of the files it has downloaded.
So if an application only reads the starts of each file, then rclone
will only buffer the start of the file. These files will appear to be
@ -275,6 +275,11 @@ whereas the --vfs-read-ahead is buffered on disk.
When using this mode it is recommended that --buffer-size is not set
too big and --vfs-read-ahead is set large if required.
**IMPORTANT** not all file systems support sparse files. In particular
FAT/exFAT do not. Rclone will perform very badly if the cache
directory is on a filesystem which doesn't support sparse files and it
will log an ERROR message if one is detected.
## VFS Performance
These flags may be used to enable/disable features of the VFS for
@ -310,6 +315,12 @@ on disk cache file.
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
When using VFS write caching (--vfs-cache-mode with value writes or full),
the global flag --transfers can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag --checkers has no effect on mount).
--transfers int Number of file transfers to run in parallel. (default 4)
## VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only
@ -323,7 +334,7 @@ It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS
file systems case-sensitive but that is not the default
The "--vfs-case-insensitive" mount flag controls how rclone handles these
The `--vfs-case-insensitive` mount flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the mounted
file system as-is. If the flag is "true" (or appears without a value on
command line), rclone may perform a "fixup" as explained below.
@ -350,7 +361,7 @@ otherwise. If the flag is provided without a value, then it is "true".
If you supply the parameter `--auth-proxy /path/to/program` then
rclone will use that program to generate backends on the fly which
then are used to authenticate incoming requests. This uses a simple
JSON based protocl with input on STDIN and output on STDOUT.
JSON based protocol with input on STDIN and output on STDOUT.
**PLEASE NOTE:** `--auth-proxy` and `--authorized-keys` cannot be used
together, if `--auth-proxy` is set the authorized keys option will be
@ -444,7 +455,7 @@ rclone serve webdav remote:path [flags]
--disable-dir-list Disable HTML directory list on GET request for a directory
--etag-hash string Which hash to use for the ETag, or auto or blank for off
--file-perms FileMode File permissions (default 0666)
--gid uint32 Override the gid field set by the filesystem. (default 1000)
--gid uint32 Override the gid field set by the filesystem. Not supported on Windows. (default 1000)
-h, --help help for webdav
--htpasswd string htpasswd file - if not provided no authentication is done
--key string SSL PEM Private key
@ -459,8 +470,8 @@ rclone serve webdav remote:path [flags]
--server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--template string User Specified Template.
--uid uint32 Override the uid field set by the filesystem. (default 1000)
--umask int Override the permission bits set by the filesystem. (default 2)
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--user string User name for authentication.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)

View file

@ -9,10 +9,6 @@ url: /commands/rclone_size/
Prints the total size and number of objects in remote:path.
## Synopsis
Prints the total size and number of objects in remote:path.
```
rclone size remote:path [flags]
```

View file

@ -21,9 +21,9 @@ unless the --no-create flag is provided.
If --timestamp is used then it will set the modification time to that
time instead of the current time. Times may be specified as one of:
- 'YYMMDD' - eg. 17.10.30
- 'YYYY-MM-DDTHH:MM:SS' - eg. 2006-01-02T15:04:05
- 'YYYY-MM-DDTHH:MM:SS.SSS' - eg. 2006-01-02T15:04:05.123456789
- 'YYMMDD' - e.g. 17.10.30
- 'YYYY-MM-DDTHH:MM:SS' - e.g. 2006-01-02T15:04:05
- 'YYYY-MM-DDTHH:MM:SS.SSS' - e.g. 2006-01-02T15:04:05.123456789
Note that --timestamp is in UTC if you want local time then add the
--localtime flag.
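For example, to create (or update) a file with an explicit modification time (the path is a placeholder):

```
rclone touch remote:path/file.txt --timestamp 2006-01-02T15:04:05
```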

View file

@ -28,7 +28,7 @@ For example
1 directories, 5 files
You can use any of the filtering options with the tree command (eg
You can use any of the filtering options with the tree command (e.g.
--include and --exclude). You can also use --fast-list.
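For example, to list only JPEG files using a filter together with --fast-list:

```
rclone tree remote:path --include "*.jpg" --fast-list
```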
The tree command has many options for controlling the listing which

View file

@ -130,4 +130,18 @@ GZIP compression level (-2 to 9).
- Type: int
- Default: -1
#### --compress-ram-cache-limit
Some remotes don't allow the upload of files with unknown size.
In this case the compressed file will need to be cached to determine
its size.
Files smaller than this limit will be cached in RAM, files larger than
this limit will be cached on disk.
- Config: ram_cache_limit
- Env Var: RCLONE_COMPRESS_RAM_CACHE_LIMIT
- Type: SizeSuffix
- Default: 20M
{{< rem autogenerated options stop >}}

View file

@ -547,8 +547,10 @@ Here are the standard options specific to drive (Google Drive).
#### --drive-client-id
OAuth Client Id
Leave blank normally.
Google Application Client Id
Setting your own is recommended.
See https://rclone.org/drive/#making-your-own-client-id for how to create your own.
If you leave this blank, it will use an internal key which is low performance.
- Config: client_id
- Env Var: RCLONE_DRIVE_CLIENT_ID
@ -1007,6 +1009,25 @@ See: https://github.com/rclone/rclone/issues/3857
- Type: bool
- Default: false
#### --drive-stop-on-download-limit
Make download limit errors be fatal
At the time of writing it is only possible to download 10TB of data from
Google Drive a day (this is an undocumented limit). When this limit is
reached Google Drive produces a slightly different error message. When
this flag is set it causes these errors to be fatal. These will stop
the in-progress sync.
Note that this detection relies on error message strings which
Google doesn't document, so it may break in the future.
- Config: stop_on_download_limit
- Env Var: RCLONE_DRIVE_STOP_ON_DOWNLOAD_LIMIT
- Type: bool
- Default: false
#### --drive-skip-shortcuts
If set skip shortcut files
@ -1171,6 +1192,33 @@ Result:
}
#### copyid
Copy files by ID
rclone backend copyid remote: [options] [<arguments>+]
This command copies files by ID
Usage:
rclone backend copyid drive: ID path
rclone backend copyid drive: ID1 path1 ID2 path2
It copies the drive file with ID given to the path (an rclone path which
will be passed internally to rclone copyto). The ID and path pairs can be
repeated.
The path should end with a / to indicate that the file should be copied as named into
this directory. If it doesn't end with a / then the last path
component will be used as the file name.
If the destination is a drive backend then server-side copying will be
attempted if possible.
Use the -i flag to see what would be copied before copying.
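For example, copying a single file into a directory while keeping its original name (FILE_ID is a placeholder for a real drive file ID):

```
rclone -i backend copyid drive: FILE_ID "backup/"
```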
{{< rem autogenerated options stop >}}
### Limitations ###

View file

@ -69,6 +69,7 @@ These flags are available for every command.
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--log-systemd Activate systemd integration for the logger.
--low-level-retries int Number of low level retries to do. (default 10)
--max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
@ -86,6 +87,7 @@ These flags are available for every command.
--multi-thread-streams int Max number of streams to use for multi-thread downloads. (default 4)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-check-dest Don't check the destination, copy regardless.
--no-console Hide console window. Supported on Windows only.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-unicode-normalization Don't normalize unicode characters in filenames.
@ -93,6 +95,7 @@ These flags are available for every command.
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
--password-command SpaceSepList Command for supplying password for encrypted configuration.
-P, --progress Show progress during transfer.
--progress-terminal-title Show progress on the terminal title. Requires -P/--progress.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -147,7 +150,7 @@ These flags are available for every command.
--use-json-log Use json log format.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.53.0")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.54.0")
-v, --verbose count Print lots more stuff (repeat for more)
```
@ -168,6 +171,7 @@ and may be set in the config file.
--alias-remote string Remote or path to alias.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use SAS URL or Emulator)
--azureblob-archive-tier-delete Delete archive tier blobs before overwriting.
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-disable-checksum Don't store MD5 checksum with object metadata.
--azureblob-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
@ -176,9 +180,14 @@ and may be set in the config file.
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed. (default 1m0s)
--azureblob-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool.
--azureblob-msi-client-id string Object ID of the user-assigned MSI to use, if any. Leave blank if msi_object_id or msi_mi_res_id specified.
--azureblob-msi-mi-res-id string Azure resource ID of the user-assigned MSI to use, if any. Leave blank if msi_client_id or msi_object_id specified.
--azureblob-msi-object-id string Object ID of the user-assigned MSI to use, if any. Leave blank if msi_client_id or msi_mi_res_id specified.
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--azureblob-service-principal-file string Path to file containing credentials for use with a service principal.
--azureblob-upload-cutoff string Cutoff for switching to chunked upload (<= 256MB). (Deprecated)
--azureblob-use-emulator Uses local storage emulator if provided as 'true' (leave blank if using real azure storage endpoint)
--azureblob-use-msi Use a managed service identity to authenticate (only works in Azure)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4G)
@ -229,10 +238,11 @@ and may be set in the config file.
--chunker-chunk-size SizeSuffix Files larger than chunk size will be split in chunks. (default 2G)
--chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks.
--chunker-hash-type string Choose how chunker handles hash sums. All modes but "none" require metadata. (default "md5")
--chunker-meta-format string Format of the metadata object or "none". By default "simplejson". (default "simplejson")
--chunker-name-format string String format of chunk file names. (default "*.rclone_chunk.###")
--chunker-remote string Remote to chunk/unchunk.
--chunker-start-from int Minimum valid chunk number. Usually 0 or 1. (default 1)
--compress-level int GZIP compression level (-2 to 9). (default -1)
--compress-mode string Compression mode. (default "gzip")
--compress-ram-cache-limit SizeSuffix Some remotes don't allow the upload of files with unknown size. (default 20M)
--compress-remote string Remote to compress.
-L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
@ -246,7 +256,7 @@ and may be set in the config file.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-auth-url string Auth server URL.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-id string OAuth Client Id
--drive-client-id string Google Application Client Id
--drive-client-secret string OAuth Client Secret
--drive-disable-http2 Disable drive using http2 (default true)
--drive-encoding MultiEncoder This sets the encoding for the backend. (default InvalidUtf8)
@ -269,6 +279,7 @@ and may be set in the config file.
--drive-skip-gdocs Skip google documents in all listings.
--drive-skip-shortcuts If set skip shortcut files
--drive-starred-only Only show files that are starred.
--drive-stop-on-download-limit Make download limit errors be fatal
--drive-stop-on-upload-limit Make upload limit errors be fatal
--drive-team-drive string ID of the Team Drive
--drive-token string OAuth Access Token as a JSON blob.
@ -285,20 +296,30 @@ and may be set in the config file.
--dropbox-client-secret string OAuth Client Secret
--dropbox-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
--dropbox-impersonate string Impersonate this user when using a business account.
--dropbox-shared-files Instructs rclone to work on individual shared files.
--dropbox-shared-folders Instructs rclone to work on shared folders.
--dropbox-token string OAuth Access Token as a JSON blob.
--dropbox-token-url string Token server url.
--fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl
--fichier-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
--fichier-shared-folder string If you want to download a shared folder, add this parameter
--filefabric-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Del,Ctl,InvalidUtf8,Dot)
--filefabric-permanent-token string Permanent Authentication Token
--filefabric-root-folder-id string ID of the root folder
--filefabric-token string Session Token
--filefabric-token-expiry string Token expiry time
--filefabric-url string URL of the Enterprise File Fabric to connect to
--filefabric-version string Version read from the file fabric
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-disable-epsv Disable using EPSV even if server advertises support
--ftp-disable-mlsd Disable using MLSD even if server advertises support
--ftp-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Del,Ctl,RightSpace,Dot)
--ftp-explicit-tls Use FTP over TLS (Explicit)
--ftp-explicit-tls Use Explicit FTPS (FTP over TLS)
--ftp-host string FTP host to connect to
--ftp-no-check-certificate Do not verify the TLS certificate of the server
--ftp-pass string FTP password (obscured)
--ftp-port string FTP port, leave blank to use default (21)
--ftp-tls Use FTPS over TLS (Implicit)
--ftp-tls Use Implicit FTPS (FTP over TLS)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-anonymous Access public buckets and objects without credentials
--gcs-auth-url string Auth server URL.
@ -317,11 +338,17 @@ and may be set in the config file.
--gphotos-auth-url string Auth server URL.
--gphotos-client-id string OAuth Client Id
--gphotos-client-secret string OAuth Client Secret
--gphotos-include-archived Also view and download archived media.
--gphotos-read-only Set to make the Google Photos backend read only.
--gphotos-read-size Set to read the size of media items.
--gphotos-start-year int Year limits the photos to be downloaded to those which are uploaded after the given year (default 2000)
--gphotos-token string OAuth Access Token as a JSON blob.
--gphotos-token-url string Token server url.
--hdfs-data-transfer-protection string Kerberos data transfer protection: authentication|integrity|privacy
--hdfs-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot)
--hdfs-namenode string hadoop name node and port
--hdfs-service-principal-name string Kerberos service principal name for the namenode
--hdfs-username string hadoop user name
--http-headers CommaSepList Set HTTP headers for all transactions
--http-no-head Don't use HEAD requests to find file sizes in dir listing
--http-no-slash Set this if the site doesn't end directories with /
@ -354,6 +381,7 @@ and may be set in the config file.
--local-no-sparse Disable sparse files for multi-thread downloads
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string Disable UNC (long path names) conversion on Windows
--local-zero-size-links Assume the Stat size of links is zero (and read them instead)
--mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true)
--mailru-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--mailru-pass string Password (obscured)
@ -374,9 +402,13 @@ and may be set in the config file.
--onedrive-client-secret string OAuth Client Secret
--onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--onedrive-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
--onedrive-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--onedrive-link-password string Set the password for links created by the link command.
--onedrive-link-scope string Set the scope of the links created by the link command. (default "anonymous")
--onedrive-link-type string Set the type of the links created by the link command. (default "view")
--onedrive-no-versions Remove all versions on modifying operations
--onedrive-region string Choose national cloud region for OneDrive. (default "global")
--onedrive-server-side-across-configs Allow server-side operations (e.g. copy) to work across different onedrive configs.
--onedrive-token string OAuth Access Token as a JSON blob.
--onedrive-token-url string Token server url.
@ -410,6 +442,7 @@ and may be set in the config file.
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656G)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-disable-http2 Disable usage of http2 for S3 backends
--s3-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8,Dot)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
@ -421,16 +454,18 @@ and may be set in the config file.
--s3-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed. (default 1m0s)
--s3-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool.
--s3-no-check-bucket If set, don't attempt to check the bucket exists or create it
--s3-no-head If set, don't HEAD uploaded objects to check integrity
--s3-profile string Profile to use in the shared credentials file
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-requester-pays Enables requester pays option when interacting with S3 bucket.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-session-token string An AWS session token
--s3-shared-credentials-file string Path to the shared credentials file
--s3-sse-customer-algorithm string If using SSE-C, the server-side encryption algorithm used when storing this object in S3.
--s3-sse-customer-key string If using SSE-C you must provide the secret encryption key used to encrypt/decrypt your data.
--s3-sse-customer-key-md5 string If using SSE-C you must provide the secret encryption key MD5 checksum.
--s3-sse-customer-key-md5 string If using SSE-C you may provide the secret encryption key MD5 checksum (optional).
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing new objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
@ -452,17 +487,20 @@ and may be set in the config file.
--sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. (obscured)
--sftp-key-pem string Raw PEM-encoded private key, If specified, will override key_file parameter.
--sftp-key-use-agent When set forces the usage of the ssh-agent.
--sftp-known-hosts-file string Optional path to known_hosts file.
--sftp-md5sum-command string The command used to read md5 hashes. Leave blank for autodetect.
--sftp-pass string SSH password, leave blank to use ssh-agent. (obscured)
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-pubkey-file string Optional path to public key file.
--sftp-server-command string Specifies the path or command to run a sftp server on the remote host.
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-sha1sum-command string The command used to read sha1 hashes. Leave blank for autodetect.
--sftp-skip-links Set to skip any symlinks and any other non regular files.
--sftp-subsystem string Specifies the SSH2 subsystem on the remote host. (default "sftp")
--sftp-use-fstat If set use fstat instead of stat
--sftp-use-insecure-cipher Enable the use of insecure ciphers and key exchange methods.
--sftp-user string SSH username, leave blank for current username, ncw
--sftp-user string SSH username, leave blank for current username, $USER
--sharefile-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 64M)
--sharefile-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
--sharefile-endpoint string Endpoint for API calls.
@ -492,6 +530,7 @@ and may be set in the config file.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-leave-parts-on-error If true avoid calling abort upload on a failure. It should be set to true for resuming uploads across different sessions.
--swift-no-chunk Don't chunk files during streaming upload.
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
@ -523,4 +562,6 @@ and may be set in the config file.
--yandex-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Del,Ctl,InvalidUtf8,Dot)
--yandex-token string OAuth Access Token as a JSON blob.
--yandex-token-url string Token server url.
--zoho-encoding MultiEncoder This sets the encoding for the backend. (default Del,Ctl,InvalidUtf8)
--zoho-region string Zoho region to connect to. You'll have to use the region your organization is registered in.
```

View file

@ -160,9 +160,9 @@ FTP password
#### --ftp-tls
Use FTPS over TLS (Implicit)
When using implicit FTP over TLS the client will connect using TLS
right from the start, which in turn breaks the compatibility with
Use Implicit FTPS (FTP over TLS)
When using implicit FTP over TLS the client connects using TLS
right from the start which breaks compatibility with
non-TLS-aware servers. This is usually served over port 990 rather
than port 21. Cannot be used in combination with explicit FTP.
@ -173,8 +173,8 @@ than port 21. Cannot be used in combination with explicit FTP.
#### --ftp-explicit-tls
Use FTP over TLS (Explicit)
When using explicit FTP over TLS the client explicitly request
Use Explicit FTPS (FTP over TLS)
When using explicit FTP over TLS the client explicitly requests
security from the server in order to upgrade a plain text connection
to an encrypted one. Cannot be used in combination with implicit FTP.
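For illustration, a minimal rclone.conf sketch of the two modes, using a placeholder host and no other options (all values below are assumptions, not part of this release's docs):

    # Implicit FTPS: TLS from the first byte, typically port 990 (placeholder host)
    [ftps-implicit]
    type = ftp
    host = ftp.example.com
    port = 990
    tls = true

    # Explicit FTPS: starts plain on port 21, then upgrades to TLS (placeholder host)
    [ftps-explicit]
    type = ftp
    host = ftp.example.com
    explicit_tls = true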
@ -214,6 +214,15 @@ Disable using EPSV even if server advertises support
- Type: bool
- Default: false
#### --ftp-disable-mlsd
Disable using MLSD even if server advertises support
- Config: disable_mlsd
- Env Var: RCLONE_FTP_DISABLE_MLSD
- Type: bool
- Default: false
#### --ftp-encoding
This sets the encoding for the backend.

View file

@ -403,7 +403,7 @@ you want to read the media.
#### --gphotos-start-year
Year limits the photos to be downloaded to those which were uploaded after the given year
Year limits the photos to be downloaded to those which are uploaded after the given year
- Config: start_year
- Env Var: RCLONE_GPHOTOS_START_YEAR

View file

@ -190,7 +190,7 @@ Here are the advanced options specific to hdfs (Hadoop distributed file system).
Kerberos service principal name for the namenode
Enables KERBEROS authentication. Specifies the Service Principal Name
(SERVICE>/<FQDN>) for the namenode.
(<SERVICE>/<FQDN>) for the namenode.
- Config: service_principal_name
- Env Var: RCLONE_HDFS_SERVICE_PRINCIPAL_NAME
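As a hedged sketch of how this might look in rclone.conf (the namenode address and principal below are placeholders):

    # Placeholder Kerberos-enabled HDFS remote
    [hdfs]
    type = hdfs
    namenode = namenode.example.com:8020
    service_principal_name = hdfs/namenode.example.com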

View file

@ -347,6 +347,24 @@ points, as you explicitly acknowledge that they should be skipped.
- Type: bool
- Default: false
#### --local-zero-size-links
Assume the Stat size of links is zero (and read them instead)
On some virtual filesystems (such as LucidLink), reading a link size via a Stat call always returns 0.
However, on unix it reads as the length of the text in the link. This may cause errors like this when
syncing:
Failed to copy: corrupted on transfer: sizes differ 0 vs 13
Setting this flag causes rclone to read the link and use that as the size of the link
instead of 0 which in most cases fixes the problem.
- Config: zero_size_links
- Env Var: RCLONE_LOCAL_ZERO_SIZE_LINKS
- Type: bool
- Default: false
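A possible invocation, assuming a LucidLink-style mount at /mnt/lucidlink and a configured remote called remote: (both placeholders):

    # Read symlink targets to get their size instead of trusting Stat's zero
    rclone sync --local-zero-size-links /mnt/lucidlink remote:backup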
#### --local-no-unicode-normalization
Don't apply unicode normalization to paths and filenames (Deprecated)

View file

@ -194,6 +194,7 @@ Skip full upload if there is another file with same data hash.
This feature is called "speedup" or "put by hash". It is especially efficient
in case of generally available files like popular books, video or audio clips,
because files are searched by hash in all accounts of all mailru users.
It is meaningless and ineffective if the source file is unique or encrypted.
Please note that rclone may need local memory and disk space to calculate
content hash in advance and decide whether full upload is required.
Also, if rclone does not know file size in advance (e.g. in case of
@ -296,7 +297,7 @@ This option must not be used by an ordinary user. It is intended only to
facilitate remote troubleshooting of backend issues. Strict meaning of
flags is not documented and not guaranteed to persist between releases.
Quirks will be removed when the backend grows stable.
Supported quirks: atomicmkdir binlist gzip insecure retry400
Supported quirks: atomicmkdir binlist unknowndirs
- Config: quirks
- Env Var: RCLONE_MAILRU_QUIRKS

View file

@ -215,6 +215,24 @@ Leave blank normally.
- Type: string
- Default: ""
#### --onedrive-region
Choose national cloud region for OneDrive.
- Config: region
- Env Var: RCLONE_ONEDRIVE_REGION
- Type: string
- Default: "global"
- Examples:
- "global"
- Microsoft Cloud Global
- "us"
- Microsoft Cloud for US Government
- "de"
- Microsoft Cloud Germany
- "cn"
- Azure and Office 365 operated by 21Vianet in China
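As a sketch, the region could be set either in the config file or through the documented environment variable; the remote name below is a placeholder:

    # rclone.conf snippet (remote name is a placeholder)
    [onedrive]
    type = onedrive
    region = us

    # or per-invocation via the environment variable
    export RCLONE_ONEDRIVE_REGION=us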
### Advanced Options
Here are the advanced options specific to onedrive (Microsoft OneDrive).
@ -298,10 +316,9 @@ listing, set this option.
Allow server-side operations (e.g. copy) to work across different onedrive configs.
This can be useful if you wish to do a server-side copy between two
different Onedrives. Note that this isn't enabled by default
because it isn't easy to tell if it will work between any two
configurations.
This will only work if you are copying between two OneDrive *Personal* drives AND
the files to copy are already shared between them. In other cases, rclone will
fall back to normal copy (which will be slightly slower).
- Config: server_side_across_configs
- Env Var: RCLONE_ONEDRIVE_SERVER_SIDE_ACROSS_CONFIGS
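A hedged example of such a copy between two configured OneDrive Personal remotes (onedrive1: and onedrive2: are placeholders, and the path must already be shared between them as described above):

    # Attempt a server-side copy between two onedrive configs
    rclone copy --onedrive-server-side-across-configs onedrive1:Shared/report.docx onedrive2:Incoming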
@ -329,6 +346,48 @@ this flag there.
- Type: bool
- Default: false
#### --onedrive-link-scope
Set the scope of the links created by the link command.
- Config: link_scope
- Env Var: RCLONE_ONEDRIVE_LINK_SCOPE
- Type: string
- Default: "anonymous"
- Examples:
- "anonymous"
- Anyone with the link has access, without needing to sign in. This may include people outside of your organization. Anonymous link support may be disabled by an administrator.
- "organization"
- Anyone signed into your organization (tenant) can use the link to get access. Only available in OneDrive for Business and SharePoint.
#### --onedrive-link-type
Set the type of the links created by the link command.
- Config: link_type
- Env Var: RCLONE_ONEDRIVE_LINK_TYPE
- Type: string
- Default: "view"
- Examples:
- "view"
- Creates a read-only link to the item.
- "edit"
- Creates a read-write link to the item.
- "embed"
- Creates an embeddable link to the item.
#### --onedrive-link-password
Set the password for links created by the link command.
At the time of writing this only works with OneDrive personal paid accounts.
- Config: link_password
- Env Var: RCLONE_ONEDRIVE_LINK_PASSWORD
- Type: string
- Default: ""
#### --onedrive-encoding
This sets the encoding for the backend.
@ -338,7 +397,7 @@ See: the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_ONEDRIVE_ENCODING
- Type: MultiEncoder
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}

View file

@ -477,18 +477,30 @@ See the [config update command](/commands/rclone_config_update/) command for mor
### core/bwlimit: Set the bandwidth limit. {#core-bwlimit}
This sets the bandwidth limit to that passed in.
This sets the bandwidth limit to the string passed in. This should be
a single bandwidth limit entry or a pair of upload:download bandwidth.
Eg
rclone rc core/bwlimit rate=off
{
"bytesPerSecond": -1,
"bytesPerSecondTx": -1,
"bytesPerSecondRx": -1,
"rate": "off"
}
rclone rc core/bwlimit rate=1M
{
"bytesPerSecond": 1048576,
"bytesPerSecondTx": 1048576,
"bytesPerSecondRx": 1048576,
"rate": "1M"
}
rclone rc core/bwlimit rate=1M:100k
{
"bytesPerSecond": 1048576,
"bytesPerSecondTx": 1048576,
"bytesPerSecondRx": 131072,
"rate": "1M"
}
@ -498,6 +510,8 @@ If the rate parameter is not supplied then the bandwidth is queried
rclone rc core/bwlimit
{
"bytesPerSecond": 1048576,
"bytesPerSecondTx": 1048576,
"bytesPerSecondRx": 1048576,
"rate": "1M"
}
@ -514,17 +528,22 @@ This takes the following parameters
- command - a string with the command name
- arg - a list of arguments for the backend command
- opt - a map of string to string of options
- returnType - one of ("COMBINED_OUTPUT", "STREAM", "STREAM_ONLY_STDOUT", "STREAM_ONLY_STDERR")
- defaults to "COMBINED_OUTPUT" if not set
- the STREAM returnTypes will write the output to the body of the HTTP message
- the COMBINED_OUTPUT will write the output to the "result" parameter
Returns
- result - result from the backend command
- only set when using returnType "COMBINED_OUTPUT"
- error - set if rclone exits with an error code
- returnType - one of ("COMBINED_OUTPUT", "STREAM", "STREAM_ONLY_STDOUT". "STREAM_ONLY_STDERR")
- returnType - one of ("COMBINED_OUTPUT", "STREAM", "STREAM_ONLY_STDOUT", "STREAM_ONLY_STDERR")
For example
rclone rc core/command command=ls -a mydrive:/ -o max-depth=1
rclone rc core/command -a ls -a mydrive:/ -o max-depth=1
rclone rc core/command -a ls -a mydrive:/ -o max-depth=1
Returns

View file

@ -570,10 +570,10 @@ Choose your S3 provider.
- Scaleway Object Storage
- "StackPath"
- StackPath Object Storage
- "Wasabi"
- Wasabi Object Storage
- "TencentCOS"
- Tencent Cloud Object Storage (COS)
- "Wasabi"
- Wasabi Object Storage
- "Other"
- Any other S3 compatible provider
@ -628,12 +628,12 @@ Region to connect to.
- "us-east-2"
- US East (Ohio) Region
- Needs location constraint us-east-2.
- "us-west-2"
- US West (Oregon) Region
- Needs location constraint us-west-2.
- "us-west-1"
- US West (Northern California) Region
- Needs location constraint us-west-1.
- "us-west-2"
- US West (Oregon) Region
- Needs location constraint us-west-2.
- "ca-central-1"
- Canada (Central) Region
- Needs location constraint ca-central-1.
@ -643,9 +643,15 @@ Region to connect to.
- "eu-west-2"
- EU (London) Region
- Needs location constraint eu-west-2.
- "eu-west-3"
- EU (Paris) Region
- Needs location constraint eu-west-3.
- "eu-north-1"
- EU (Stockholm) Region
- Needs location constraint eu-north-1.
- "eu-south-1"
- EU (Milan) Region
- Needs location constraint eu-south-1.
- "eu-central-1"
- EU (Frankfurt) Region
- Needs location constraint eu-central-1.
@ -661,6 +667,9 @@ Region to connect to.
- "ap-northeast-2"
- Asia Pacific (Seoul)
- Needs location constraint ap-northeast-2.
- "ap-northeast-3"
- Asia Pacific (Osaka-Local)
- Needs location constraint ap-northeast-3.
- "ap-south-1"
- Asia Pacific (Mumbai)
- Needs location constraint ap-south-1.
@ -670,6 +679,24 @@ Region to connect to.
- "sa-east-1"
- South America (Sao Paulo) Region
- Needs location constraint sa-east-1.
- "me-south-1"
- Middle East (Bahrain) Region
- Needs location constraint me-south-1.
- "af-south-1"
- Africa (Cape Town) Region
- Needs location constraint af-south-1.
- "cn-north-1"
- China (Beijing) Region
- Needs location constraint cn-north-1.
- "cn-northwest-1"
- China (Ningxia) Region
- Needs location constraint cn-northwest-1.
- "us-gov-east-1"
- AWS GovCloud (US-East) Region
- Needs location constraint us-gov-east-1.
- "us-gov-west-1"
- AWS GovCloud (US) Region
- Needs location constraint us-gov-west-1.
#### --s3-region
@ -925,6 +952,54 @@ Endpoint for StackPath Object Storage.
#### --s3-endpoint
Endpoint for Tencent COS API.
- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Type: string
- Default: ""
- Examples:
- "cos.ap-beijing.myqcloud.com"
- Beijing Region.
- "cos.ap-nanjing.myqcloud.com"
- Nanjing Region.
- "cos.ap-shanghai.myqcloud.com"
- Shanghai Region.
- "cos.ap-guangzhou.myqcloud.com"
- Guangzhou Region.
- "cos.ap-nanjing.myqcloud.com"
- Nanjing Region.
- "cos.ap-chengdu.myqcloud.com"
- Chengdu Region.
- "cos.ap-chongqing.myqcloud.com"
- Chongqing Region.
- "cos.ap-hongkong.myqcloud.com"
- Hong Kong (China) Region.
- "cos.ap-singapore.myqcloud.com"
- Singapore Region.
- "cos.ap-mumbai.myqcloud.com"
- Mumbai Region.
- "cos.ap-seoul.myqcloud.com"
- Seoul Region.
- "cos.ap-bangkok.myqcloud.com"
- Bangkok Region.
- "cos.ap-tokyo.myqcloud.com"
- Tokyo Region.
- "cos.na-siliconvalley.myqcloud.com"
- Silicon Valley Region.
- "cos.na-ashburn.myqcloud.com"
- Virginia Region.
- "cos.na-toronto.myqcloud.com"
- Toronto Region.
- "cos.eu-frankfurt.myqcloud.com"
- Frankfurt Region.
- "cos.eu-moscow.myqcloud.com"
- Moscow Region.
- "cos.accelerate.myqcloud.com"
- Use Tencent COS Accelerate Endpoint.
#### --s3-endpoint
Endpoint for S3 API.
Required when using an S3 clone.
@ -962,18 +1037,22 @@ Used when creating buckets only.
- Empty for US Region, Northern Virginia, or Pacific Northwest.
- "us-east-2"
- US East (Ohio) Region.
- "us-west-2"
- US West (Oregon) Region.
- "us-west-1"
- US West (Northern California) Region.
- "us-west-2"
- US West (Oregon) Region.
- "ca-central-1"
- Canada (Central) Region.
- "eu-west-1"
- EU (Ireland) Region.
- "eu-west-2"
- EU (London) Region.
- "eu-west-3"
- EU (Paris) Region.
- "eu-north-1"
- EU (Stockholm) Region.
- "eu-south-1"
- EU (Milan) Region.
- "EU"
- EU Region.
- "ap-southeast-1"
@ -983,13 +1062,27 @@ Used when creating buckets only.
- "ap-northeast-1"
- Asia Pacific (Tokyo) Region.
- "ap-northeast-2"
- Asia Pacific (Seoul)
- Asia Pacific (Seoul) Region.
- "ap-northeast-3"
- Asia Pacific (Osaka-Local) Region.
- "ap-south-1"
- Asia Pacific (Mumbai)
- Asia Pacific (Mumbai) Region.
- "ap-east-1"
- Asia Pacific (Hong Kong)
- Asia Pacific (Hong Kong) Region.
- "sa-east-1"
- South America (Sao Paulo) Region.
- "me-south-1"
- Middle East (Bahrain) Region.
- "af-south-1"
- Africa (Cape Town) Region.
- "cn-north-1"
- China (Beijing) Region
- "cn-northwest-1"
- China (Ningxia) Region.
- "us-gov-east-1"
- AWS GovCloud (US-East) Region.
- "us-gov-west-1"
- AWS GovCloud (US) Region.
#### --s3-location-constraint
@ -1092,6 +1185,8 @@ doesn't copy the ACL from the source but rather writes a fresh one.
- Type: string
- Default: ""
- Examples:
- "default"
- Owner gets FULL_CONTROL. No one else has access rights (default).
- "private"
- Owner gets FULL_CONTROL. No one else has access rights (default).
- "public-read"
@ -1192,6 +1287,24 @@ The storage class to use when storing new objects in OSS.
#### --s3-storage-class
The storage class to use when storing new objects in Tencent COS.
- Config: storage_class
- Env Var: RCLONE_S3_STORAGE_CLASS
- Type: string
- Default: ""
- Examples:
- ""
- Default
- "STANDARD"
- Standard storage class
- "ARCHIVE"
- Archive storage mode.
- "STANDARD_IA"
- Infrequent access storage mode.
#### --s3-storage-class
The storage class to use when storing new objects in S3.
- Config: storage_class
@ -1208,7 +1321,7 @@ The storage class to use when storing new objects in S3.
### Advanced Options
Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, Tencent COS).
Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, and Tencent COS).
#### --s3-bucket-acl
@ -1234,6 +1347,15 @@ isn't set then "acl" is used instead.
- "authenticated-read"
- Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
#### --s3-requester-pays
Enables requester pays option when interacting with S3 bucket.
- Config: requester_pays
- Env Var: RCLONE_S3_REQUESTER_PAYS
- Type: bool
- Default: false
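For example, listing a requester pays bucket might look like this; s3remote: and the bucket name are placeholders:

    # The requester (not the bucket owner) is billed for the request
    rclone lsd --s3-requester-pays s3remote:requester-pays-bucket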
#### --s3-sse-customer-algorithm
If using SSE-C, the server-side encryption algorithm used when storing this object in S3.
@ -1262,7 +1384,10 @@ If using SSE-C you must provide the secret encryption key used to encrypt/decryp
#### --s3-sse-customer-key-md5
If using SSE-C you must provide the secret encryption key MD5 checksum.
If using SSE-C you may provide the secret encryption key MD5 checksum (optional).
If you leave it blank, this is calculated automatically from the sse_customer_key provided.
- Config: sse_customer_key_md5
- Env Var: RCLONE_S3_SSE_CUSTOMER_KEY_MD5
@ -1429,7 +1554,7 @@ if false then rclone will use virtual path style. See [the AWS S3
docs](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro)
for more info.
Some providers (e.g. AWS, Aliyun OSS or Netease COS) require this set to
Some providers (e.g. AWS, Aliyun OSS, Netease COS, or Tencent COS) require this set to
false - rclone will do this automatically based on the provider
setting.
@ -1499,12 +1624,53 @@ If set, don't attempt to check the bucket exists or create it
This can be useful when trying to minimise the number of transactions
rclone does if you know the bucket exists already.
It can also be needed if the user you are using does not have bucket
creation permissions. Before v1.52.0 this would have passed silently
due to a bug.
- Config: no_check_bucket
- Env Var: RCLONE_S3_NO_CHECK_BUCKET
- Type: bool
- Default: false
#### --s3-no-head
If set, don't HEAD uploaded objects to check integrity
This can be useful when trying to minimise the number of transactions
rclone does.
Setting it means that if rclone receives a 200 OK message after
uploading an object with PUT then it will assume that it got uploaded
properly.
In particular it will assume:
- the metadata, including modtime, storage class and content type, was as uploaded
- the size was as uploaded
It reads the following items from the response for a single part PUT:
- the MD5SUM
- The uploaded date
For multipart uploads these items aren't read.
If a source object of unknown length is uploaded then rclone **will** do a
HEAD request.
Setting this flag increases the chance for undetected upload failures,
in particular an incorrect size, so it isn't recommended for normal
operation. In practice the chance of an undetected upload failure is
very small even with this flag.
- Config: no_head
- Env Var: RCLONE_S3_NO_HEAD
- Type: bool
- Default: false
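A hedged example of a bulk upload where the extra HEAD per object is skipped; the local path and s3remote: are placeholders:

    # Trades integrity checking for fewer transactions on upload
    rclone copy --s3-no-head /data/export s3remote:bucket/export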
#### --s3-encoding
This sets the encoding for the backend.
@ -1536,6 +1702,23 @@ Whether to use mmap buffers in internal memory pool.
- Type: bool
- Default: false
#### --s3-disable-http2
Disable usage of http2 for S3 backends
There is currently an unsolved issue with the s3 (specifically minio) backend
and HTTP/2. HTTP/2 is enabled by default for the s3 backend but can be
disabled here. When the issue is solved this flag will be removed.
See: https://github.com/rclone/rclone/issues/4673, https://github.com/rclone/rclone/issues/3631
- Config: disable_http2
- Env Var: RCLONE_S3_DISABLE_HTTP2
- Type: bool
- Default: false
### Backend commands
Here are the commands specific to the s3 backend.

View file

@ -478,6 +478,24 @@ The subsystem option is ignored when server_command is defined.
- Type: string
- Default: ""
#### --sftp-use-fstat
If set use fstat instead of stat
Some servers limit the number of open files and calling Stat after opening
the file will throw an error from the server. Setting this flag will call
Fstat instead of Stat which is called on an already open file handle.
It has been found that this helps with IBM Sterling SFTP servers which have
"extractability" level set to 1 which means only 1 file can be opened at
any given time.
- Config: use_fstat
- Env Var: RCLONE_SFTP_USE_FSTAT
- Type: bool
- Default: false
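For instance, against a server that only allows one open file at a time, a copy might be run like this (sftpremote: and the paths are placeholders):

    # Stat the already-open handle rather than opening the file a second time
    rclone copy --sftp-use-fstat sftpremote:outbox /tmp/inbox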
{{< rem autogenerated options stop >}}
### Limitations ###

View file

@ -430,6 +430,15 @@ provider.
Here are the advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).
#### --swift-leave-parts-on-error
If true, avoid calling abort upload on a failure. It should be set to true for resuming uploads across different sessions.
- Config: leave_parts_on_error
- Env Var: RCLONE_SWIFT_LEAVE_PARTS_ON_ERROR
- Type: bool
- Default: false
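A sketch of an upload intended to be resumable across sessions; the local file and swiftremote: are placeholders:

    # Keep already-uploaded segments if the transfer is interrupted
    rclone copy --swift-leave-parts-on-error /data/bigfile.iso swiftremote:container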
#### --swift-chunk-size
Above this size files will be chunked into a _segments container.

View file

@ -127,59 +127,28 @@ from filenames during upload.
Here are the standard options specific to zoho (Zoho).
#### --zoho-client-id
#### --zoho-region
OAuth Client Id
Leave blank normally.
Zoho region to connect to. You'll have to use the region your organization is registered in.
- Config: client_id
- Env Var: RCLONE_ZOHO_CLIENT_ID
- Type: string
- Default: ""
#### --zoho-client-secret
OAuth Client Secret
Leave blank normally.
- Config: client_secret
- Env Var: RCLONE_ZOHO_CLIENT_SECRET
- Config: region
- Env Var: RCLONE_ZOHO_REGION
- Type: string
- Default: ""
- Examples:
- "com"
- United states / Global
- "eu"
- Europe
- "in"
- India
- "com.au"
- Australia
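A sketch of setting the region, either in the config file or via the documented environment variable; the remote name below is a placeholder:

    # rclone.conf snippet (remote name is a placeholder)
    [zoho]
    type = zoho
    region = eu

    # or via the environment variable
    export RCLONE_ZOHO_REGION=eu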
### Advanced Options
Here are the advanced options specific to zoho (Zoho).
#### --zoho-token
OAuth Access Token as a JSON blob.
- Config: token
- Env Var: RCLONE_ZOHO_TOKEN
- Type: string
- Default: ""
#### --zoho-auth-url
Auth server URL.
Leave blank to use the provider defaults.
- Config: auth_url
- Env Var: RCLONE_ZOHO_AUTH_URL
- Type: string
- Default: ""
#### --zoho-token-url
Token server url.
Leave blank to use the provider defaults.
- Config: token_url
- Env Var: RCLONE_ZOHO_TOKEN_URL
- Type: string
- Default: ""
#### --zoho-encoding
This sets the encoding for the backend.

2
go.sum
View file

@ -120,8 +120,6 @@ github.com/btcsuite/goleveldb v0.0.0-20160330041536-7834afc9e8cd/go.mod h1:F+uVa
github.com/btcsuite/snappy-go v0.0.0-20151229074030-0bdef8d06723/go.mod h1:8woku9dyThutzjeg+3xrA5iCpBRH8XEEg3lh6TiUghc=
github.com/btcsuite/websocket v0.0.0-20150119174127-31079b680792/go.mod h1:ghJtEyQwv5/p4Mg4C0fgbePVuGr935/5ddU9Z3TmDRY=
github.com/btcsuite/winsvc v1.0.0/go.mod h1:jsenWakMcC0zFBFurPLEAyrnc/teJEM1O46fmI40EZs=
github.com/buengese/sgzip v0.1.0 h1:Ti0JwfuRhcjZkFKk+RY+P+CtZ+puw9xjTqKgFgnfEsg=
github.com/buengese/sgzip v0.1.0/go.mod h1:i5ZiXGF3fhV7gL1xaRRL1nDnmpNj0X061FQzOS8VMas=
github.com/buengese/sgzip v0.1.1 h1:ry+T8l1mlmiWEsDrH/YHZnCVWD2S3im1KLsyO+8ZmTU=
github.com/buengese/sgzip v0.1.1/go.mod h1:i5ZiXGF3fhV7gL1xaRRL1nDnmpNj0X061FQzOS8VMas=
github.com/calebcase/tmpfile v1.0.2-0.20200602150926-3af473ef8439/go.mod h1:iErLeG/iqJr8LaQ/gYRv4GXdqssi3jg4iSzvrA06/lw=

29922
rclone.1 generated

File diff suppressed because it is too large Load diff