Version v1.64.0

Nick Craig-Wood 2023-09-11 15:59:44 +01:00
parent a5a61f4874
commit 77f7bb08af
42 changed files with 59648 additions and 46222 deletions

MANUAL.html (generated): 29365 changed lines (diff suppressed because it is too large)

MANUAL.md (generated): 5683 changed lines (diff suppressed because it is too large)

MANUAL.txt (generated): 30877 changed lines (diff suppressed because it is too large)


@ -25,6 +25,7 @@ docs = [
"flags.md",
"docker.md",
"bisync.md",
"release_signing.md",
# Keep these alphabetical by full name
"fichier.md",


@ -737,10 +737,7 @@ Properties:
#### --azureblob-memory-pool-flush-time
How often internal memory buffer pools will be flushed.
Uploads which requires additional buffers (f.e multipart) will use memory pool for allocations.
This option controls how often unused buffers will be removed from the pool.
How often internal memory buffer pools will be flushed. (no longer used)
Properties:
@ -751,7 +748,7 @@ Properties:
#### --azureblob-memory-pool-use-mmap
Whether to use mmap buffers in internal memory pool.
Whether to use mmap buffers in internal memory pool. (no longer used)
Properties:


@ -492,6 +492,24 @@ Properties:
- Type: SizeSuffix
- Default: 96Mi
#### --b2-upload-concurrency
Concurrency for multipart uploads.
This is the number of chunks of the same file that are uploaded
concurrently.
Note that chunks are stored in memory and there may be up to
"--transfers" * "--b2-upload-concurrency" chunks stored at once
in memory.
Properties:
- Config: upload_concurrency
- Env Var: RCLONE_B2_UPLOAD_CONCURRENCY
- Type: int
- Default: 16
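
As a sketch (the bucket path and values below are placeholders, not from the
docs), upload concurrency combines with `--transfers` to bound memory use,
since up to `--transfers` * `--b2-upload-concurrency` chunks can be buffered
at once:

```
# Hypothetical paths and values: 4 transfers * 8 concurrent chunks
# means up to 32 chunks may be held in memory at the same time.
rclone copy /local/backup b2:my-bucket/backup --transfers 4 --b2-upload-concurrency 8
```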
#### --b2-disable-checksum
Disable checksums for large (> upload cutoff) files.
@ -550,9 +568,7 @@ Properties:
#### --b2-memory-pool-flush-time
How often internal memory buffer pools will be flushed.
Uploads which requires additional buffers (f.e multipart) will use memory pool for allocations.
This option controls how often unused buffers will be removed from the pool.
How often internal memory buffer pools will be flushed. (no longer used)
Properties:
@ -563,7 +579,7 @@ Properties:
#### --b2-memory-pool-use-mmap
Whether to use mmap buffers in internal memory pool.
Whether to use mmap buffers in internal memory pool. (no longer used)
Properties:


@ -438,6 +438,28 @@ Properties:
- Type: string
- Required: false
#### --box-impersonate
Impersonate this user ID when using a service account.
Setting this flag allows rclone, when using a JWT service account, to
act on behalf of another user by setting the as-user header.
The user ID is the Box identifier for a user. User IDs can be found for
any user via the GET /users endpoint, which is only available to
admins, or by calling the GET /users/me endpoint with an authenticated
user session.
See: https://developer.box.com/guides/authentication/jwt/as-user/
Properties:
- Config: impersonate
- Env Var: RCLONE_BOX_IMPERSONATE
- Type: string
- Required: false
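
For example (the user ID below is a made-up placeholder), a JWT service
account could list another user's folders like this:

```
# Hypothetical user ID; look real IDs up via the Box GET /users endpoint
rclone lsd box: --box-impersonate 123456
```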
#### --box-encoding
The encoding for the backend.


@ -5,6 +5,140 @@ description: "Rclone Changelog"
# Changelog
## v1.64.0 - 2023-09-11
[See commits](https://github.com/rclone/rclone/compare/v1.63.0...v1.64.0)
* New backends
* [Proton Drive](/protondrive/) (Chun-Hung Tseng)
* [Quatrix](/quatrix/) (Oksana, Volodymyr Kit)
* New S3 providers
* [Synology C2](/s3/#synology-c2) (BakaWang)
* [Leviia](/s3/#leviia) (Benjamin)
* New Jottacloud providers
* [Onlime](/jottacloud/) (Fjodor42)
* [Telia Sky](/jottacloud/) (NoLooseEnds)
* Major changes
* Multi-thread transfers (Vitor Gomes, Nick Craig-Wood, Manoj Ghosh, Edwin Mackenzie-Owen)
* Multi-thread transfers are now available when transferring to:
* `local`, `s3`, `azureblob`, `b2`, `oracleobjectstorage` and `smb`
* This greatly improves transfer speed between two network sources.
* In memory buffering has been unified between all backends and should share memory better.
* See [--multi-thread docs](/docs/#multi-thread-cutoff) for more info
* New commands
* `rclone config redacted` support mechanism for showing redacted config (Nick Craig-Wood)
* New Features
* accounting
* Show server-side stats on their own lines and not as bytes transferred (Nick Craig-Wood)
* bisync
* Add new `--ignore-listing-checksum` flag to distinguish from `--ignore-checksum` (nielash)
* Add experimental `--resilient` mode to allow recovery from self-correctable errors (nielash)
* Add support for `--create-empty-src-dirs` (nielash)
* Dry runs no longer commit filter changes (nielash)
* Enforce `--check-access` during `--resync` (nielash)
* Apply filters correctly during deletes (nielash)
* Equality check before renaming (leave identical files alone) (nielash)
* Fix `dryRun` rc parameter being ignored (nielash)
* build
* Update to `go1.21` and make `go1.19` the minimum required version (Anagh Kumar Baranwal, Nick Craig-Wood)
* Update dependencies (Nick Craig-Wood)
* Add snap installation (hideo aoyama)
* Change Winget Releaser job to `ubuntu-latest` (sitiom)
* cmd: Refactor and use sysdnotify in more commands (eNV25)
* config: Add `--multi-thread-chunk-size` flag (Vitor Gomes)
* doc updates (antoinetran, Benjamin, Bjørn Smith, Dean Attali, gabriel-suela, James Braza, Justin Hellings, kapitainsky, Mahad, Masamune3210, Nick Craig-Wood, Nihaal Sangha, Niklas Hambüchen, Raymond Berger, r-ricci, Sawada Tsunayoshi, Tiago Boeing, Vladislav Vorobev)
* fs
* Use atomic types everywhere (Roberto Ricci)
* When `--max-transfer` limit is reached exit with code (10) (kapitainsky)
* Add rclone completion powershell - basic implementation only (Nick Craig-Wood)
* http servers: Allow CORS to be set with `--allow-origin` flag (yuudi)
* lib/rest: Remove unnecessary `nil` check (Eng Zer Jun)
* ncdu: Add keybinding to rescan filesystem (eNV25)
* rc
* Add `executeId` to job listings (yuudi)
* Add `core/du` to measure local disk usage (Nick Craig-Wood)
* Add `operations/settier` to API (Drew Stinnett)
* rclone test info: Add `--check-base32768` flag to check whether the remote can store all base32768 characters (Nick Craig-Wood)
* rmdirs: Remove directories concurrently controlled by `--checkers` (Nick Craig-Wood)
* Bug Fixes
* accounting: Don't stop calculating average transfer speed until the operation is complete (Jacob Hands)
* fs: Fix `transferTime` not being set in JSON logs (Jacob Hands)
* fshttp: Fix `--bind 0.0.0.0` allowing IPv6 and `--bind ::0` allowing IPv4 (Nick Craig-Wood)
* operations: Fix overlapping check on case insensitive file systems (Nick Craig-Wood)
* serve dlna: Fix MIME type if backend can't identify it (Nick Craig-Wood)
* serve ftp: Fix race condition when using the auth proxy (Nick Craig-Wood)
* serve sftp: Fix hash calculations with `--vfs-cache-mode full` (Nick Craig-Wood)
* serve webdav: Fix error: Expecting fs.Object or fs.Directory, got `nil` (Nick Craig-Wood)
* sync: Fix lockup with `--cutoff-mode=soft` and `--max-duration` (Nick Craig-Wood)
* Mount
* Fix mount parsing for Linux (Anagh Kumar Baranwal)
* VFS
* Add `--vfs-cache-min-free-space` to control minimum free space on the disk containing the cache (Nick Craig-Wood)
* Added cache cleaner for directories to reduce memory usage (Anagh Kumar Baranwal)
* Update parent directory modtimes on vfs actions (David Pedersen)
* Keep virtual directory status accurate and reduce deadlock potential (Anagh Kumar Baranwal)
* Make sure struct field is aligned for atomic access (Roberto Ricci)
* Local
* Rmdir: return an error if the path is not a directory (zjx20)
* Azure Blob
* Implement `OpenChunkWriter` and multi-thread uploads (Nick Craig-Wood)
* Fix creation of directory markers (Nick Craig-Wood)
* Fix purging with directory markers (Nick Craig-Wood)
* B2
* Implement `OpenChunkWriter` and multi-thread uploads (Nick Craig-Wood)
* Fix rclone link when object path contains special characters (Alishan Ladhani)
* Box
* Add polling support (David Sze)
* Add `--box-impersonate` to impersonate a user ID (Nick Craig-Wood)
* Fix unhelpful decoding of error messages into decimal numbers (Nick Craig-Wood)
* Chunker
* Update documentation to mention issue with small files (Ricardo D'O. Albanus)
* Compress
* Fix ChangeNotify (Nick Craig-Wood)
* Drive
* Add `--drive-fast-list-bug-fix` to control ListR bug workaround (Nick Craig-Wood)
* Fichier
* Implement `DirMove` (Nick Craig-Wood)
* Fix error code parsing (alexia)
* FTP
* Add socks_proxy support for SOCKS5 proxies (Zach)
* Fix 425 "TLS session of data connection not resumed" errors (Nick Craig-Wood)
* Hdfs
* Retry "replication in progress" errors when uploading (Nick Craig-Wood)
* Fix uploading to the wrong object on Update with overridden remote name (Nick Craig-Wood)
* HTTP
* CORS should not be sent if not set (yuudi)
* Fix webdav OPTIONS response (yuudi)
* Opendrive
* Fix List on a just deleted and remade directory (Nick Craig-Wood)
* Oracleobjectstorage
* Use rclone's rate limiter in multipart transfers (Manoj Ghosh)
* Implement `OpenChunkWriter` and multi-thread uploads (Manoj Ghosh)
* S3
* Refactor multipart upload to use `OpenChunkWriter` and `ChunkWriter` (Vitor Gomes)
* Factor generic multipart upload into `lib/multipart` (Nick Craig-Wood)
* Fix purging of root directory with `--s3-directory-markers` (Nick Craig-Wood)
* Add `rclone backend set` command to update the running config (Nick Craig-Wood)
* Add `rclone backend restore-status` command (Nick Craig-Wood)
* SFTP
* Stop uploads re-using the same ssh connection to improve performance (Nick Craig-Wood)
* Add `--sftp-ssh` to specify an external ssh binary to use (Nick Craig-Wood)
* Add socks_proxy support for SOCKS5 proxies (Zach)
* Support dynamic `--sftp-path-override` (nielash)
* Fix spurious warning when using `--sftp-ssh` (Nick Craig-Wood)
* Smb
* Implement multi-threaded writes for copies to smb (Edwin Mackenzie-Owen)
* Storj
* Performance improvement for large file uploads (Kaloyan Raev)
* Swift
* Fix HEADing 0-length objects when `--swift-no-large-objects` set (Julian Lepinski)
* Union
* Add `:writeback` to act as a simple cache (Nick Craig-Wood)
* WebDAV
* Nextcloud: fix segment violation in low-level retry (Paul)
* Zoho
* Remove Range requests workarounds to fix integration tests (Nick Craig-Wood)
## v1.63.1 - 2023-07-17
[See commits](https://github.com/rclone/rclone/compare/v1.63.0...v1.63.1)


@ -54,8 +54,6 @@ rclone [flags]
--azureblob-env-auth Read credentials from runtime (environment variables, CLI or MSI)
--azureblob-key string Storage Account Shared Key
--azureblob-list-chunk int Size of blob list (default 5000)
--azureblob-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s)
--azureblob-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool
--azureblob-msi-client-id string Object ID of the user-assigned MSI to use, if any
--azureblob-msi-mi-res-id string Azure resource ID of the user-assigned MSI to use, if any
--azureblob-msi-object-id string Object ID of the user-assigned MSI to use, if any
@ -81,9 +79,8 @@ rclone [flags]
--b2-endpoint string Endpoint for the service
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files
--b2-key string Application Key
--b2-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s)
--b2-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging
--b2-upload-concurrency int Concurrency for multipart uploads (default 16)
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--b2-version-at Time Show file versions as they were at the specified time (default off)
--b2-versions Include old versions in directory listings
@ -97,6 +94,7 @@ rclone [flags]
--box-client-secret string OAuth Client Secret
--box-commit-retries int Max number of times to try committing a multipart file (default 100)
--box-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
--box-impersonate string Impersonate this user ID when using a service account
--box-list-chunk int Size of listing chunk 1-1000 (default 1000)
--box-owned-by string Only show items owned by the login (email address) passed in
--box-root-folder-id string Fill in for rclone to use a non root folder as its starting point
@ -130,7 +128,7 @@ rclone [flags]
--cache-writes Cache file data on writes through the FS
--check-first Do all the checks before starting transfers
--checkers int Number of checkers to run in parallel (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--chunker-chunk-size SizeSuffix Files larger than chunk size will be split in chunks (default 2Gi)
--chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks
--chunker-hash-type string Choose how chunker handles hash sums (default "md5")
@ -181,6 +179,7 @@ rclone [flags]
--drive-encoding MultiEncoder The encoding for the backend (default InvalidUtf8)
--drive-env-auth Get IAM credentials from runtime (environment variables or instance meta data if no env vars)
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs (default "docx,xlsx,pptx,svg")
--drive-fast-list-bug-fix Work around a bug in Google Drive listing (default true)
--drive-formats string Deprecated: See export_formats
--drive-impersonate string Impersonate this user when using a service account
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs
@ -434,8 +433,9 @@ rclone [flags]
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
--modify-window Duration Max time diff to be considered the same (default 1ns)
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 250Mi)
--multi-thread-streams int Max number of streams to use for multi-thread downloads (default 4)
--multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi)
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--netstorage-account string Set the NetStorage account name
--netstorage-host string Domain+path of NetStorage host to connect to
@ -470,6 +470,7 @@ rclone [flags]
--onedrive-server-side-across-configs Deprecated: use --server-side-across-configs instead
--onedrive-token string OAuth Access Token as a JSON blob
--onedrive-token-url string Token server url
--oos-attempt-resume-upload If true attempt to resume previously started multipart upload for the object
--oos-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
--oos-compartment string Object storage compartment OCID
--oos-config-file string Path to OCI config file (default "~/.oci/config")
@ -479,7 +480,8 @@ rclone [flags]
--oos-disable-checksum Don't store MD5 checksum with object metadata
--oos-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
--oos-endpoint string Endpoint for Object storage API
--oos-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery
--oos-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts for manual recovery
--oos-max-upload-parts int Maximum number of parts in a multipart upload (default 10000)
--oos-namespace string Object storage namespace
--oos-no-check-bucket If set, don't attempt to check the bucket exists or create it
--oos-provider string Choose your Auth Provider (default "env_auth")
@ -532,10 +534,11 @@ rclone [flags]
--protondrive-app-version string The app version string (default "macos-drive@1.0.0-alpha.1+rclone")
--protondrive-enable-caching Caches the files and folders metadata to reduce API calls (default true)
--protondrive-encoding MultiEncoder The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot)
--protondrive-mailbox-password string The mailbox password of your two-password proton account (obscured)
--protondrive-original-file-size Return the file size before encryption (default true)
--protondrive-password string The password of your proton drive account (obscured)
--protondrive-password string The password of your proton account (obscured)
--protondrive-replace-existing-draft Create a new revision when filename conflict is detected
--protondrive-username string The username of your proton drive account
--protondrive-username string The username of your proton account
--putio-auth-url string Auth server URL
--putio-client-id string OAuth Client Id
--putio-client-secret string OAuth Client Secret
@ -552,6 +555,13 @@ rclone [flags]
--qingstor-upload-concurrency int Concurrency for multipart uploads (default 1)
--qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--qingstor-zone string Zone to connect to
--quatrix-api-key string API key for accessing Quatrix account
--quatrix-effective-upload-time string Wanted upload time for one chunk (default "4s")
--quatrix-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--quatrix-hard-delete Delete files permanently rather than putting them into the trash
--quatrix-host string Host name of Quatrix account
--quatrix-maximal-summary-chunk-size SizeSuffix The maximal summary for all chunks. It should not be less than 'transfers'*'minimal_chunk_size' (default 95.367Mi)
--quatrix-minimal-chunk-size SizeSuffix The minimal size for one chunk (default 9.537Mi)
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default [localhost:5572])
@ -604,8 +614,6 @@ rclone [flags]
--s3-list-version int Version of ListObjects to use: 1,2 or 0 for auto
--s3-location-constraint string Location constraint - must be set to match the Region
--s3-max-upload-parts int Maximum number of parts in a multipart upload (default 10000)
--s3-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s)
--s3-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool
--s3-might-gzip Tristate Set this if the backend might gzip objects (default unset)
--s3-no-check-bucket If set, don't attempt to check the bucket exists or create it
--s3-no-head If set, don't HEAD uploaded objects to check integrity
@ -776,7 +784,7 @@ rclone [flags]
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default "rclone/v1.64.0-beta.7196.08e40f21b.fix-flag-groups")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.64.0")
-v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)


@ -33,17 +33,20 @@ rclone bisync remote1:path1 remote2:path2 [flags]
## Options
```
--check-access Ensure expected RCLONE_TEST files are found on both Path1 and Path2 filesystems, else abort.
--check-filename string Filename for --check-access (default: RCLONE_TEST)
--check-sync string Controls comparison of final listings: true|false|only (default: true) (default "true")
--filters-file string Read filtering patterns from a file
--force Bypass --max-delete safety check and run the sync. Consider using with --verbose
-h, --help help for bisync
--localtime Use local time in listings (default: UTC)
--no-cleanup Retain working files (useful for troubleshooting and testing).
--remove-empty-dirs Remove empty directories at the final cleanup step.
-1, --resync Performs the resync run. Path1 files may overwrite Path2 versions. Consider using --verbose or --dry-run first.
--workdir string Use custom working dir - useful for testing. (default: $HOME/.cache/rclone/bisync)
--check-access Ensure expected RCLONE_TEST files are found on both Path1 and Path2 filesystems, else abort.
--check-filename string Filename for --check-access (default: RCLONE_TEST)
--check-sync string Controls comparison of final listings: true|false|only (default: true) (default "true")
--create-empty-src-dirs Sync creation and deletion of empty directories. (Not compatible with --remove-empty-dirs)
--filters-file string Read filtering patterns from a file
--force Bypass --max-delete safety check and run the sync. Consider using with --verbose
-h, --help help for bisync
--ignore-listing-checksum Do not use checksums for listings (add --ignore-checksum to additionally skip post-copy checksum checks)
--localtime Use local time in listings (default: UTC)
--no-cleanup Retain working files (useful for troubleshooting and testing).
--remove-empty-dirs Remove ALL empty directories at the final cleanup step.
--resilient Allow future runs to retry after certain less-serious errors, instead of requiring --resync. Use at your own risk!
-1, --resync Performs the resync run. Path1 files may overwrite Path2 versions. Consider using --verbose or --dry-run first.
--workdir string Use custom working dir - useful for testing. (default: $HOME/.cache/rclone/bisync)
```
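
A sketch of how the new flags might be combined (the remote names are
placeholders; `--resilient` is experimental, so use it at your own risk):

```
rclone bisync remote1:path1 remote2:path2 \
    --create-empty-src-dirs --ignore-listing-checksum --resilient --verbose
```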
@ -53,7 +56,7 @@ Flags for anything which can Copy a file.
```
--check-first Do all the checks before starting transfers
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
--cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
@ -69,8 +72,9 @@ Flags for anything which can Copy a file.
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
-M, --metadata If set, preserve metadata when copying objects
--modify-window Duration Max time diff to be considered the same (default 1ns)
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 250Mi)
--multi-thread-streams int Max number of streams to use for multi-thread downloads (default 4)
--multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi)
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy


@ -88,7 +88,7 @@ Flags for anything which can Copy a file.
```
--check-first Do all the checks before starting transfers
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
--cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
@ -104,8 +104,9 @@ Flags for anything which can Copy a file.
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
-M, --metadata If set, preserve metadata when copying objects
--modify-window Duration Max time diff to be considered the same (default 1ns)
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 250Mi)
--multi-thread-streams int Max number of streams to use for multi-thread downloads (default 4)
--multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi)
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy


@ -60,7 +60,7 @@ Flags for anything which can Copy a file.
```
--check-first Do all the checks before starting transfers
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
--cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
@ -76,8 +76,9 @@ Flags for anything which can Copy a file.
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
-M, --metadata If set, preserve metadata when copying objects
--modify-window Duration Max time diff to be considered the same (default 1ns)
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 250Mi)
--multi-thread-streams int Max number of streams to use for multi-thread downloads (default 4)
--multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi)
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy


@ -543,12 +543,13 @@ write simultaneously to a file. See below for more details.
Note that the VFS cache is separate from the cache backend and you may
find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
If run with `-vv` rclone will print the location of the file cache. The
files are stored in the user cache file area which is OS dependent but
@ -565,14 +566,15 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` note that the cache may exceed this size
for two reasons. Firstly because it is only checked every
`--vfs-cache-poll-interval`. Secondly because open files cannot be
evicted from the cache. When `--vfs-cache-max-size`
is exceeded, rclone will attempt to evict the least accessed files
from the cache first. rclone will start with files that haven't
been accessed for the longest. This cache flushing strategy is
efficient and more relevant files are likely to remain cached.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
files are likely to remain cached.
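
A possible combination of these limits on a mount (the mountpoint and sizes
are illustrative only):

```
# Keep the cache under 10Gi and leave at least 1Gi free on the cache disk
rclone mount remote:path /path/to/mountpoint \
    --vfs-cache-mode full --vfs-cache-max-size 10Gi --vfs-cache-min-free-space 1Gi
```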
The `--vfs-cache-max-age` will evict files from the cache
after the set time since last access has passed. The default value of
@ -838,6 +840,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match


@ -64,7 +64,7 @@ Flags for anything which can Copy a file.
```
--check-first Do all the checks before starting transfers
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
--cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
@ -80,8 +80,9 @@ Flags for anything which can Copy a file.
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
-M, --metadata If set, preserve metadata when copying objects
--modify-window Duration Max time diff to be considered the same (default 1ns)
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 250Mi)
--multi-thread-streams int Max number of streams to use for multi-thread downloads (default 4)
--multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi)
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy


@ -63,7 +63,7 @@ Flags for anything which can Copy a file.
```
--check-first Do all the checks before starting transfers
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
--cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
@ -79,8 +79,9 @@ Flags for anything which can Copy a file.
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
-M, --metadata If set, preserve metadata when copying objects
--modify-window Duration Max time diff to be considered the same (default 1ns)
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 250Mi)
--multi-thread-streams int Max number of streams to use for multi-thread downloads (default 4)
--multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi)
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy


@ -44,6 +44,7 @@ press '?' to toggle the help on and off. The supported keys are:
y copy current path to clipboard
Y display current path
^L refresh screen (fix screen corruption)
r recalculate file sizes
? to toggle help on and off
q/ESC/^c to quit


@ -27,7 +27,10 @@ empty directories in. For example the [delete](/commands/rclone_delete/)
command will delete files but leave the directory structure (unless
used with option `--rmdirs`).
To delete a path and any objects in it, use [purge](/commands/rclone_purge/)
This will delete `--checkers` directories concurrently, so
if you have thousands of empty directories consider increasing this number.
To delete a path and any objects in it, use the [purge](/commands/rclone_purge/)
command.
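
For instance (the remote path is illustrative), the concurrency can be raised
when pruning a very large tree of empty directories:

```
# Hypothetical remote; removes empty directories 32 at a time
rclone rmdirs remote:path --checkers 32
```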


@ -13,9 +13,10 @@ Update the rclone binary.
## Synopsis
This command downloads the latest release of rclone and replaces
the currently running binary. The download is verified with a hashsum
and cryptographically signed signature.
This command downloads the latest release of rclone and replaces the
currently running binary. The download is verified with a hashsum and
cryptographically signed signature; see [the release signing
docs](/release_signing/) for details.
If used without flags (or with implied `--stable` flag), this command
will install the latest stable release. However, some issues may be fixed
@ -48,7 +49,7 @@ your OS) to update these too. This command with the default `--package zip`
will update only the rclone executable so the local manual may become
inaccurate after it.
The `rclone mount` command (https://rclone.org/commands/rclone_mount/) may
The [rclone mount](/commands/rclone_mount/) command may
or may not support extended FUSE options depending on the build and OS.
`selfupdate` will refuse to update if the capability would be discarded.
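
In the simplest case (no flags, so the implied `--stable` channel applies) the
update is just:

```
rclone selfupdate
```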


@ -111,12 +111,13 @@ write simultaneously to a file. See below for more details.
Note that the VFS cache is separate from the cache backend and you may
find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
If run with `-vv` rclone will print the location of the file cache. The
files are stored in the user cache file area which is OS dependent but
@ -133,14 +134,15 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` note that the cache may exceed this size
for two reasons. Firstly because it is only checked every
`--vfs-cache-poll-interval`. Secondly because open files cannot be
evicted from the cache. When `--vfs-cache-max-size`
is exceeded, rclone will attempt to evict the least accessed files
from the cache first. rclone will start with files that haven't
been accessed for the longest. This cache flushing strategy is
efficient and more relevant files are likely to remain cached.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
files are likely to remain cached.
The `--vfs-cache-max-age` will evict files from the cache
after the set time since last access has passed. The default value of
@ -393,6 +395,7 @@ rclone serve dlna remote:path [flags]
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match


@ -127,12 +127,13 @@ write simultaneously to a file. See below for more details.
Note that the VFS cache is separate from the cache backend and you may
find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
If run with `-vv` rclone will print the location of the file cache. The
files are stored in the user cache file area which is OS dependent but
@ -149,14 +150,15 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` note that the cache may exceed this size
for two reasons. Firstly because it is only checked every
`--vfs-cache-poll-interval`. Secondly because open files cannot be
evicted from the cache. When `--vfs-cache-max-size`
is exceeded, rclone will attempt to evict the least accessed files
from the cache first. rclone will start with files that haven't
been accessed for the longest. This cache flushing strategy is
efficient and more relevant files are likely to remain cached.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
files are likely to remain cached.
The `--vfs-cache-max-age` will evict files from the cache
after the set time since last access has passed. The default value of
@ -427,6 +429,7 @@ rclone serve docker [flags]
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match


@ -108,12 +108,13 @@ write simultaneously to a file. See below for more details.
Note that the VFS cache is separate from the cache backend and you may
find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
If run with `-vv` rclone will print the location of the file cache. The
files are stored in the user cache file area which is OS dependent but
@ -130,14 +131,15 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` note that the cache may exceed this size
for two reasons. Firstly because it is only checked every
`--vfs-cache-poll-interval`. Secondly because open files cannot be
evicted from the cache. When `--vfs-cache-max-size`
is exceeded, rclone will attempt to evict the least accessed files
from the cache first. rclone will start with files that haven't
been accessed for the longest. This cache flushing strategy is
efficient and more relevant files are likely to remain cached.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
files are likely to remain cached.
The `--vfs-cache-max-age` will evict files from the cache
after the set time since last access has passed. The default value of
@ -474,6 +476,7 @@ rclone serve ftp remote:path [flags]
--user string User name for authentication (default "anonymous")
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match


@ -198,12 +198,13 @@ write simultaneously to a file. See below for more details.
Note that the VFS cache is separate from the cache backend and you may
find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
If run with `-vv` rclone will print the location of the file cache. The
files are stored in the user cache file area which is OS dependent but
@ -220,14 +221,15 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` note that the cache may exceed this size
for two reasons. Firstly because it is only checked every
`--vfs-cache-poll-interval`. Secondly because open files cannot be
evicted from the cache. When `--vfs-cache-max-size`
is exceeded, rclone will attempt to evict the least accessed files
from the cache first. rclone will start with files that haven't
been accessed for the longest. This cache flushing strategy is
efficient and more relevant files are likely to remain cached.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
files are likely to remain cached.
The `--vfs-cache-max-age` will evict files from the cache
after the set time since last access has passed. The default value of
@ -573,6 +575,7 @@ rclone serve http remote:path [flags]
--user string User name for authentication
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match


@ -140,12 +140,13 @@ write simultaneously to a file. See below for more details.
Note that the VFS cache is separate from the cache backend and you may
find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
If run with `-vv` rclone will print the location of the file cache. The
files are stored in the user cache file area which is OS dependent but
@ -162,14 +163,15 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` note that the cache may exceed this size
for two reasons. Firstly because it is only checked every
`--vfs-cache-poll-interval`. Secondly because open files cannot be
evicted from the cache. When `--vfs-cache-max-size`
is exceeded, rclone will attempt to evict the least accessed files
from the cache first. rclone will start with files that haven't
been accessed for the longest. This cache flushing strategy is
efficient and more relevant files are likely to remain cached.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
files are likely to remain cached.
The `--vfs-cache-max-age` will evict files from the cache
after the set time since last access has passed. The default value of
@ -506,6 +508,7 @@ rclone serve sftp remote:path [flags]
--user string User name for authentication
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match


@ -227,12 +227,13 @@ write simultaneously to a file. See below for more details.
Note that the VFS cache is separate from the cache backend and you may
find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
If run with `-vv` rclone will print the location of the file cache. The
files are stored in the user cache file area which is OS dependent but
@ -249,14 +250,15 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` note that the cache may exceed this size
for two reasons. Firstly because it is only checked every
`--vfs-cache-poll-interval`. Secondly because open files cannot be
evicted from the cache. When `--vfs-cache-max-size`
is exceeded, rclone will attempt to evict the least accessed files
from the cache first. rclone will start with files that haven't
been accessed for the longest. This cache flushing strategy is
efficient and more relevant files are likely to remain cached.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
files are likely to remain cached.
The `--vfs-cache-max-age` will evict files from the cache
after the set time since last access has passed. The default value of
@ -604,6 +606,7 @@ rclone serve webdav remote:path [flags]
--user string User name for authentication
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match


@ -67,7 +67,7 @@ Flags for anything which can Copy a file.
```
--check-first Do all the checks before starting transfers
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
--cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
@ -83,8 +83,9 @@ Flags for anything which can Copy a file.
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
-M, --metadata If set, preserve metadata when copying objects
--modify-window Duration Max time diff to be considered the same (default 1ns)
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 250Mi)
--multi-thread-streams int Max number of streams to use for multi-thread downloads (default 4)
--multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi)
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy

View file

@ -28,6 +28,7 @@ rclone test info [remote:path]+ [flags]
```
--all Run all tests
--check-base32768 Check can store all possible base32768 characters
--check-control Check control characters
--check-length Check max filename length
--check-normalization Check UTF-8 Normalization

View file

@ -600,7 +600,7 @@ Properties:
- Encode using base64. Suitable for case sensitive remote.
- "base32768"
- Encode using base32768. Suitable if your remote counts UTF-16 or
- Unicode codepoint instead of UTF-8 byte length. (Eg. Onedrive, Dropbox, Box)
- Unicode codepoint instead of UTF-8 byte length. (Eg. Onedrive, Dropbox)
#### --crypt-suffix

View file

@ -1548,7 +1548,7 @@ but not `OpenChunkWriter`) don't have a natural chunk size.
In this case the value of this option is used (default 64Mi).
### --multi-thread-cutoff=SIZE ###
### --multi-thread-cutoff=SIZE {#multi-thread-cutoff}
When transferring files above SIZE to capable backends, rclone will
use multiple threads to transfer the file (default 256M).
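
For example (the remote, file and local path are placeholders), the
cutoff and number of streams can be tuned per transfer:

```
rclone copy remote:big.iso /tmp/downloads \
    --multi-thread-cutoff 100M \
    --multi-thread-streams 8
```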

View file

@ -1194,7 +1194,7 @@ This resource key requirement only applies to a subset of old files.
Note also that opening the folder once in the web interface (with the
user you've authenticated rclone with) seems to be enough so that the
resource key is no needed.
resource key is not needed.
Properties:
@ -1204,6 +1204,34 @@ Properties:
- Type: string
- Required: false
#### --drive-fast-list-bug-fix
Work around a bug in Google Drive listing.
Normally rclone will work around a bug in Google Drive when using
--fast-list (ListR) where the search "(A in parents) or (B in
parents)" returns nothing sometimes. See #3114, #4289 and
https://issuetracker.google.com/issues/149522397
Rclone detects this by finding no items in more than one directory
when listing and retries them as lists of individual directories.
This means that if you have a lot of empty directories rclone will end
up listing them all individually and this can take many more API
calls.
This flag allows the work-around to be disabled. This is **not**
recommended in normal use - only if you have a particular case you are
having trouble with like many empty directories.
Properties:
- Config: fast_list_bug_fix
- Env Var: RCLONE_DRIVE_FAST_LIST_BUG_FIX
- Type: bool
- Default: true
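
For example, to disable the work-around for a single listing (the
remote and path are placeholders):

```
rclone lsf --fast-list --drive-fast-list-bug-fix=false drive:path
```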
#### --drive-encoding
The encoding for the backend.

View file

@ -15,7 +15,7 @@ Flags for anything which can Copy a file.
```
--check-first Do all the checks before starting transfers
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
--cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
@ -31,8 +31,9 @@ Flags for anything which can Copy a file.
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
-M, --metadata If set, preserve metadata when copying objects
--modify-window Duration Max time diff to be considered the same (default 1ns)
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 250Mi)
--multi-thread-streams int Max number of streams to use for multi-thread downloads (default 4)
--multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi)
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
@ -110,7 +111,7 @@ General networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default "rclone/v1.64.0-beta.7196.08e40f21b.fix-flag-groups")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.64.0")
```
@ -318,8 +319,6 @@ Backend only flags. These can be set in the config file also.
--azureblob-env-auth Read credentials from runtime (environment variables, CLI or MSI)
--azureblob-key string Storage Account Shared Key
--azureblob-list-chunk int Size of blob list (default 5000)
--azureblob-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s)
--azureblob-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool
--azureblob-msi-client-id string Object ID of the user-assigned MSI to use, if any
--azureblob-msi-mi-res-id string Azure resource ID of the user-assigned MSI to use, if any
--azureblob-msi-object-id string Object ID of the user-assigned MSI to use, if any
@ -345,9 +344,8 @@ Backend only flags. These can be set in the config file also.
--b2-endpoint string Endpoint for the service
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files
--b2-key string Application Key
--b2-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s)
--b2-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging
--b2-upload-concurrency int Concurrency for multipart uploads (default 16)
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--b2-version-at Time Show file versions as they were at the specified time (default off)
--b2-versions Include old versions in directory listings
@ -359,6 +357,7 @@ Backend only flags. These can be set in the config file also.
--box-client-secret string OAuth Client Secret
--box-commit-retries int Max number of times to try committing a multipart file (default 100)
--box-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
--box-impersonate string Impersonate this user ID when using a service account
--box-list-chunk int Size of listing chunk 1-1000 (default 1000)
--box-owned-by string Only show items owned by the login (email address) passed in
--box-root-folder-id string Fill in for rclone to use a non root folder as its starting point
@ -418,6 +417,7 @@ Backend only flags. These can be set in the config file also.
--drive-encoding MultiEncoder The encoding for the backend (default InvalidUtf8)
--drive-env-auth Get IAM credentials from runtime (environment variables or instance meta data if no env vars)
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs (default "docx,xlsx,pptx,svg")
--drive-fast-list-bug-fix Work around a bug in Google Drive listing (default true)
--drive-formats string Deprecated: See export_formats
--drive-impersonate string Impersonate this user when using a service account
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs
@ -636,6 +636,7 @@ Backend only flags. These can be set in the config file also.
--onedrive-server-side-across-configs Deprecated: use --server-side-across-configs instead
--onedrive-token string OAuth Access Token as a JSON blob
--onedrive-token-url string Token server url
--oos-attempt-resume-upload If true attempt to resume previously started multipart upload for the object
--oos-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
--oos-compartment string Object storage compartment OCID
--oos-config-file string Path to OCI config file (default "~/.oci/config")
@ -645,7 +646,8 @@ Backend only flags. These can be set in the config file also.
--oos-disable-checksum Don't store MD5 checksum with object metadata
--oos-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
--oos-endpoint string Endpoint for Object storage API
--oos-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery
--oos-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts for manual recovery
--oos-max-upload-parts int Maximum number of parts in a multipart upload (default 10000)
--oos-namespace string Object storage namespace
--oos-no-check-bucket If set, don't attempt to check the bucket exists or create it
--oos-provider string Choose your Auth Provider (default "env_auth")
@ -694,10 +696,11 @@ Backend only flags. These can be set in the config file also.
--protondrive-app-version string The app version string (default "macos-drive@1.0.0-alpha.1+rclone")
--protondrive-enable-caching Caches the files and folders metadata to reduce API calls (default true)
--protondrive-encoding MultiEncoder The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot)
--protondrive-mailbox-password string The mailbox password of your two-password proton account (obscured)
--protondrive-original-file-size Return the file size before encryption (default true)
--protondrive-password string The password of your proton drive account (obscured)
--protondrive-password string The password of your proton account (obscured)
--protondrive-replace-existing-draft Create a new revision when filename conflict is detected
--protondrive-username string The username of your proton drive account
--protondrive-username string The username of your proton account
--putio-auth-url string Auth server URL
--putio-client-id string OAuth Client Id
--putio-client-secret string OAuth Client Secret
@ -714,6 +717,13 @@ Backend only flags. These can be set in the config file also.
--qingstor-upload-concurrency int Concurrency for multipart uploads (default 1)
--qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--qingstor-zone string Zone to connect to
--quatrix-api-key string API key for accessing Quatrix account
--quatrix-effective-upload-time string Wanted upload time for one chunk (default "4s")
--quatrix-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--quatrix-hard-delete Delete files permanently rather than putting them into the trash
--quatrix-host string Host name of Quatrix account
--quatrix-maximal-summary-chunk-size SizeSuffix The maximal summary for all chunks. It should not be less than 'transfers'*'minimal_chunk_size' (default 95.367Mi)
--quatrix-minimal-chunk-size SizeSuffix The minimal size for one chunk (default 9.537Mi)
--s3-access-key-id string AWS Access Key ID
--s3-acl string Canned ACL used when creating buckets and storing or copying objects
--s3-bucket-acl string Canned ACL used when creating buckets
@ -734,8 +744,6 @@ Backend only flags. These can be set in the config file also.
--s3-list-version int Version of ListObjects to use: 1,2 or 0 for auto
--s3-location-constraint string Location constraint - must be set to match the Region
--s3-max-upload-parts int Maximum number of parts in a multipart upload (default 10000)
--s3-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s)
--s3-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool
--s3-might-gzip Tristate Set this if the backend might gzip objects (default unset)
--s3-no-check-bucket If set, don't attempt to check the bucket exists or create it
--s3-no-head If set, don't HEAD uploaded objects to check integrity

View file

@ -415,6 +415,24 @@ Properties:
- Type: bool
- Default: false
#### --ftp-socks-proxy
Socks 5 proxy host.
Supports the format user:pass@host:port, user@host:port, host:port.
Example:
myUser:myPass@localhost:9005
Properties:
- Config: socks_proxy
- Env Var: RCLONE_FTP_SOCKS_PROXY
- Type: string
- Required: false
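
For example, to route a directory listing through a local SOCKS5 proxy
(the remote name and proxy details are placeholders):

```
rclone lsd myftp: --ftp-socks-proxy myUser:myPass@localhost:9005
```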
#### --ftp-encoding
The encoding for the backend.

View file

@ -305,10 +305,77 @@ command which will display your usage limit (unless it is unlimited)
and the current usage.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/jottacloud/jottacloud.go then run make backenddocs" >}}
### Standard options
Here are the Standard options specific to jottacloud (Jottacloud).
#### --jottacloud-client-id
OAuth Client Id.
Leave blank normally.
Properties:
- Config: client_id
- Env Var: RCLONE_JOTTACLOUD_CLIENT_ID
- Type: string
- Required: false
#### --jottacloud-client-secret
OAuth Client Secret.
Leave blank normally.
Properties:
- Config: client_secret
- Env Var: RCLONE_JOTTACLOUD_CLIENT_SECRET
- Type: string
- Required: false
### Advanced options
Here are the Advanced options specific to jottacloud (Jottacloud).
#### --jottacloud-token
OAuth Access Token as a JSON blob.
Properties:
- Config: token
- Env Var: RCLONE_JOTTACLOUD_TOKEN
- Type: string
- Required: false
#### --jottacloud-auth-url
Auth server URL.
Leave blank to use the provider defaults.
Properties:
- Config: auth_url
- Env Var: RCLONE_JOTTACLOUD_AUTH_URL
- Type: string
- Required: false
#### --jottacloud-token-url
Token server url.
Leave blank to use the provider defaults.
Properties:
- Config: token_url
- Env Var: RCLONE_JOTTACLOUD_TOKEN_URL
- Type: string
- Required: false
#### --jottacloud-md5-memory-limit
Files bigger than this will be cached on disk to calculate the MD5 if required.

View file

@ -387,7 +387,7 @@ Assume the Stat size of links is zero (and read them instead) (deprecated).
Rclone used to use the Stat size of links as the link size, but this fails in quite a few places:
- Windows
- On some virtual filesystems (such as LucidLink)
- On some virtual filesystems (such as LucidLink)
- Android
So rclone now always reads the link.
@ -562,7 +562,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_LOCAL_ENCODING
- Type: MultiEncoder
- Default: Slash,InvalidUtf8,Dot
- Default: Slash,Dot
### Metadata

View file

@ -174,6 +174,32 @@ as they can't be used in JSON strings.
Here are the Standard options specific to mailru (Mail.ru Cloud).
#### --mailru-client-id
OAuth Client Id.
Leave blank normally.
Properties:
- Config: client_id
- Env Var: RCLONE_MAILRU_CLIENT_ID
- Type: string
- Required: false
#### --mailru-client-secret
OAuth Client Secret.
Leave blank normally.
Properties:
- Config: client_secret
- Env Var: RCLONE_MAILRU_CLIENT_SECRET
- Type: string
- Required: false
#### --mailru-user
User name (usually email).
@ -232,6 +258,43 @@ Properties:
Here are the Advanced options specific to mailru (Mail.ru Cloud).
#### --mailru-token
OAuth Access Token as a JSON blob.
Properties:
- Config: token
- Env Var: RCLONE_MAILRU_TOKEN
- Type: string
- Required: false
#### --mailru-auth-url
Auth server URL.
Leave blank to use the provider defaults.
Properties:
- Config: auth_url
- Env Var: RCLONE_MAILRU_AUTH_URL
- Type: string
- Required: false
#### --mailru-token-url
Token server url.
Leave blank to use the provider defaults.
Properties:
- Config: token_url
- Env Var: RCLONE_MAILRU_TOKEN_URL
- Type: string
- Required: false
#### --mailru-speedup-file-patterns
Comma separated list of file name patterns eligible for speedup (put by hash).

View file

@ -108,6 +108,32 @@ as they can't be used in JSON strings.
Here are the Standard options specific to premiumizeme (premiumize.me).
#### --premiumizeme-client-id
OAuth Client Id.
Leave blank normally.
Properties:
- Config: client_id
- Env Var: RCLONE_PREMIUMIZEME_CLIENT_ID
- Type: string
- Required: false
#### --premiumizeme-client-secret
OAuth Client Secret.
Leave blank normally.
Properties:
- Config: client_secret
- Env Var: RCLONE_PREMIUMIZEME_CLIENT_SECRET
- Type: string
- Required: false
#### --premiumizeme-api-key
API Key.
@ -126,6 +152,43 @@ Properties:
Here are the Advanced options specific to premiumizeme (premiumize.me).
#### --premiumizeme-token
OAuth Access Token as a JSON blob.
Properties:
- Config: token
- Env Var: RCLONE_PREMIUMIZEME_TOKEN
- Type: string
- Required: false
#### --premiumizeme-auth-url
Auth server URL.
Leave blank to use the provider defaults.
Properties:
- Config: auth_url
- Env Var: RCLONE_PREMIUMIZEME_AUTH_URL
- Type: string
- Required: false
#### --premiumizeme-token-url
Token server url.
Leave blank to use the provider defaults.
Properties:
- Config: token_url
- Env Var: RCLONE_PREMIUMIZEME_TOKEN_URL
- Type: string
- Required: false
#### --premiumizeme-encoding
The encoding for the backend.

View file

@ -130,7 +130,7 @@ Here are the Standard options specific to protondrive (Proton Drive).
#### --protondrive-username
The username of your proton drive account
The username of your proton account
Properties:
@ -141,7 +141,7 @@ Properties:
#### --protondrive-password
The password of your proton drive account.
The password of your proton account.
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
@ -172,6 +172,68 @@ Properties:
Here are the Advanced options specific to protondrive (Proton Drive).
#### --protondrive-mailbox-password
The mailbox password of your two-password proton account.
For more information regarding the mailbox password, please check the
following official knowledge base article:
https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
Properties:
- Config: mailbox_password
- Env Var: RCLONE_PROTONDRIVE_MAILBOX_PASSWORD
- Type: string
- Required: false
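
As a sketch, the obscured value can also be supplied via the
environment variable instead of the config file (the remote name and
password are placeholders, and the remote is assumed to be otherwise
configured):

```
rclone obscure 'my-mailbox-password'
# export the value printed by the command above
export RCLONE_PROTONDRIVE_MAILBOX_PASSWORD='<obscured value>'
rclone lsd protonremote:
```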
#### --protondrive-client-uid
Client uid key (internal use only)
Properties:
- Config: client_uid
- Env Var: RCLONE_PROTONDRIVE_CLIENT_UID
- Type: string
- Required: false
#### --protondrive-client-access-token
Client access token key (internal use only)
Properties:
- Config: client_access_token
- Env Var: RCLONE_PROTONDRIVE_CLIENT_ACCESS_TOKEN
- Type: string
- Required: false
#### --protondrive-client-refresh-token
Client refresh token key (internal use only)
Properties:
- Config: client_refresh_token
- Env Var: RCLONE_PROTONDRIVE_CLIENT_REFRESH_TOKEN
- Type: string
- Required: false
#### --protondrive-client-salted-key-pass
Client salted key pass key (internal use only)
Properties:
- Config: client_salted_key_pass
- Env Var: RCLONE_PROTONDRIVE_CLIENT_SALTED_KEY_PASS
- Type: string
- Required: false
#### --protondrive-encoding
The encoding for the backend.

View file

@ -115,10 +115,77 @@ Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/putio/putio.go then run make backenddocs" >}}
### Standard options
Here are the Standard options specific to putio (Put.io).
#### --putio-client-id
OAuth Client Id.
Leave blank normally.
Properties:
- Config: client_id
- Env Var: RCLONE_PUTIO_CLIENT_ID
- Type: string
- Required: false
#### --putio-client-secret
OAuth Client Secret.
Leave blank normally.
Properties:
- Config: client_secret
- Env Var: RCLONE_PUTIO_CLIENT_SECRET
- Type: string
- Required: false
### Advanced options
Here are the Advanced options specific to putio (Put.io).
#### --putio-token
OAuth Access Token as a JSON blob.
Properties:
- Config: token
- Env Var: RCLONE_PUTIO_TOKEN
- Type: string
- Required: false
#### --putio-auth-url
Auth server URL.
Leave blank to use the provider defaults.
Properties:
- Config: auth_url
- Env Var: RCLONE_PUTIO_AUTH_URL
- Type: string
- Required: false
#### --putio-token-url
Token server url.
Leave blank to use the provider defaults.
Properties:
- Config: token_url
- Env Var: RCLONE_PUTIO_TOKEN_URL
- Type: string
- Required: false
#### --putio-encoding
The encoding for the backend.

View file

@ -732,6 +732,28 @@ OR
**Authentication is required for this call.**
### core/du: Returns disk usage of a locally attached disk. {#core-du}
This returns the disk usage for the local directory passed in as dir.
If the directory is not passed in, it defaults to the directory
pointed to by --cache-dir.
- dir - string (optional)
Returns:
```
{
"dir": "/",
"info": {
"Available": 361769115648,
"Free": 361785892864,
"Total": 982141468672
}
}
```
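
For example, against an rclone instance started with the remote
control enabled (the directory path is a placeholder):

```
rclone rc core/du dir=/var/cache/rclone
```

Omitting `dir` returns the usage of the directory pointed to by
--cache-dir.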
### core/gc: Runs a garbage collection. {#core-gc}
This tells the go runtime to do a garbage collection run. It isn't
@ -811,6 +833,10 @@ Returns the following values:
"lastError": last error string,
"renames" : number of files renamed,
"retryError": boolean showing whether there has been at least one non-NoRetryError,
"serverSideCopies": number of server side copies done,
"serverSideCopyBytes": number bytes server side copied,
"serverSideMoves": number of server side moves done,
"serverSideMoveBytes": number bytes server side moved,
"speed": average speed in bytes per second since start of the group,
"totalBytes": total number of bytes in the group,
"totalChecks": total number of checks in the group,
@ -1012,7 +1038,8 @@ Parameters: None.
Results:
- jobids - array of integer job ids.
- executeId - string id of the running rclone instance (changes after restart)
- jobids - array of integer job ids (starting at 1 on each restart)
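
Assuming this is the `job/list` call, a minimal query looks like this
(the values shown are illustrative only):

```
rclone rc job/list
{
    "executeId": "b3a9e0d2",
    "jobids": [1, 2]
}
```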
### job/status: Reads the status of the job ID {#job-status}
@ -1415,6 +1442,27 @@ See the [rmdirs](/commands/rclone_rmdirs/) command for more information on the a
**Authentication is required for this call.**
### operations/settier: Changes storage tier or class on all files in the path {#operations-settier}
This takes the following parameters:
- fs - a remote name string e.g. "drive:"
See the [settier](/commands/rclone_settier/) command for more information on the above.
**Authentication is required for this call.**
### operations/settierfile: Changes storage tier or class on the single file pointed to {#operations-settierfile}
This takes the following parameters:
- fs - a remote name string e.g. "drive:"
- remote - a path within that remote e.g. "dir"
See the [settierfile](/commands/rclone_settierfile/) command for more information on the above.
**Authentication is required for this call.**
### operations/size: Count the number of bytes and files in remote {#operations-size}
This takes the following parameters:
@ -1654,13 +1702,13 @@ This takes the following parameters
- checkSync - `true` by default, `false` disables comparison of final listings,
`only` will skip sync, only compare listings from the last run
- createEmptySrcDirs - Sync creation and deletion of empty directories.
(Not compatible with --remove-empty-dirs)
(Not compatible with --remove-empty-dirs)
- removeEmptyDirs - remove empty directories at the final cleanup step
- filtersFile - read filtering patterns from a file
- ignoreListingChecksum - Do not use checksums for listings
- resilient - Allow future runs to retry after certain less-serious errors, instead of requiring resync.
Use at your own risk!
- workdir - Use custom working directory (default: `~/.cache/rclone/bisync`)
- workdir - server directory for history files (default: `~/.cache/rclone/bisync`)
- noCleanup - retain working files
See [bisync command help](https://rclone.org/commands/rclone_bisync/)

View file

@ -664,7 +664,7 @@ A simple solution is to set the `--s3-upload-cutoff 0` and force all the files t
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/s3/s3.go then run make backenddocs" >}}
### Standard options
Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, China Mobile, Cloudflare, GCS, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Liara, Lyve Cloud, Minio, Netease, Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi).
Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, China Mobile, Cloudflare, GCS, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Leviia, Liara, Lyve Cloud, Minio, Netease, Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi).
#### --s3-provider
@ -705,6 +705,8 @@ Properties:
- IONOS Cloud
- "LyveCloud"
- Seagate Lyve Cloud
- "Leviia"
- Leviia Object Storage
- "Liara"
- Liara Object Storage
- "Minio"
@ -1078,6 +1080,30 @@ Properties:
#### --s3-region
Region where your data is stored.
Properties:
- Config: region
- Env Var: RCLONE_S3_REGION
- Provider: Synology
- Type: string
- Required: false
- Examples:
- "eu-001"
- Europe Region 1
- "eu-002"
- Europe Region 2
- "us-001"
- US Region 1
- "us-002"
- US Region 2
- "tw-001"
- Asia (Taiwan)
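
As a sketch, a Synology C2 remote could be created non-interactively
like this (the remote name and credentials are placeholders; region and
endpoint use the eu-001 values documented for Synology C2):

```
rclone config create synology-c2 s3 \
    provider Synology \
    region eu-001 \
    endpoint eu-001.s3.synologyc2.net \
    access_key_id XXX \
    secret_access_key XXX
```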
#### --s3-region
Region to connect to.
Leave blank if you are using an S3 clone and you don't have a region.
@ -1392,6 +1418,22 @@ Properties:
#### --s3-endpoint
Endpoint for Leviia Object Storage API.
Properties:
- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Provider: Leviia
- Type: string
- Required: false
- Examples:
- "s3.leviia.com"
- The default endpoint
- Leviia
#### --s3-endpoint
Endpoint for Liara Object Storage API.
Properties:
@ -1593,15 +1635,15 @@ Properties:
- Required: false
- Examples:
- "eu-001.s3.synologyc2.net"
- Europe Region 1
- EU Endpoint 1
- "eu-002.s3.synologyc2.net"
- Europe Region 2
- EU Endpoint 2
- "us-001.s3.synologyc2.net"
- US Region 1
- US Endpoint 1
- "us-002.s3.synologyc2.net"
- US Region 2
- US Endpoint 2
- "tw-001.s3.synologyc2.net"
- Asia Region (Taiwan)
- TW Endpoint 1
#### --s3-endpoint
@ -2130,7 +2172,7 @@ Properties:
- Config: location_constraint
- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
- Provider: !AWS,Alibaba,ArvanCloud,HuaweiOBS,ChinaMobile,Cloudflare,IBMCOS,IDrive,IONOS,Liara,Qiniu,RackCorp,Scaleway,StackPath,Storj,TencentCOS,Petabox
- Provider: !AWS,Alibaba,ArvanCloud,HuaweiOBS,ChinaMobile,Cloudflare,IBMCOS,IDrive,IONOS,Leviia,Liara,Qiniu,RackCorp,Scaleway,StackPath,Storj,TencentCOS,Petabox
- Type: string
- Required: false
@ -2153,7 +2195,7 @@ Properties:
- Config: acl
- Env Var: RCLONE_S3_ACL
- Provider: !Storj,Cloudflare
- Provider: !Storj,Synology,Cloudflare
- Type: string
- Required: false
- Examples:
@ -2408,7 +2450,7 @@ Properties:
### Advanced options
Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, China Mobile, Cloudflare, GCS, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Liara, Lyve Cloud, Minio, Netease, Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi).
Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, China Mobile, Cloudflare, GCS, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Leviia, Liara, Lyve Cloud, Minio, Netease, Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi).
#### --s3-bucket-acl
@ -2906,10 +2948,7 @@ Properties:
#### --s3-memory-pool-flush-time
How often internal memory buffer pools will be flushed.
Uploads which require additional buffers (e.g. multipart) will use the memory pool for allocations.
This option controls how often unused buffers will be removed from the pool.
How often internal memory buffer pools will be flushed. (no longer used)
Properties:
@ -2920,7 +2959,7 @@ Properties:
#### --s3-memory-pool-use-mmap
Whether to use mmap buffers in internal memory pool.
Whether to use mmap buffers in internal memory pool. (no longer used)
Properties:
@ -3186,17 +3225,17 @@ to normal storage.
Usage Examples:
rclone backend restore s3:bucket/path/to/object [-o priority=PRIORITY] [-o lifetime=DAYS]
rclone backend restore s3:bucket/path/to/directory [-o priority=PRIORITY] [-o lifetime=DAYS]
rclone backend restore s3:bucket [-o priority=PRIORITY] [-o lifetime=DAYS]
rclone backend restore s3:bucket/path/to/object -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket -o priority=PRIORITY -o lifetime=DAYS
This flag also obeys the filters. Test first with --interactive/-i or --dry-run flags
rclone --interactive backend restore --include "*.txt" s3:bucket/path -o priority=Standard
rclone --interactive backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1
All the objects shown will be marked for restore, then
rclone backend restore --include "*.txt" s3:bucket/path -o priority=Standard
rclone backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1
It returns a list of status dictionaries with Remote and Status
keys. The Status will be OK if it was successful or an error message
@ -3205,11 +3244,11 @@ if not.
[
{
"Status": "OK",
"Path": "test.txt"
"Remote": "test.txt"
},
{
"Status": "OK",
"Path": "test/file4.txt"
"Remote": "test/file4.txt"
}
]
@ -3221,6 +3260,51 @@ Options:
- "lifetime": Lifetime of the active copy in days
- "priority": Priority of restore: Standard|Expedited|Bulk
### restore-status
Show the restore status for objects being restored from GLACIER to normal storage
rclone backend restore-status remote: [options] [<arguments>+]
This command can be used to show the status for objects being restored from GLACIER
to normal storage.
Usage Examples:
rclone backend restore-status s3:bucket/path/to/object
rclone backend restore-status s3:bucket/path/to/directory
rclone backend restore-status -o all s3:bucket/path/to/directory
This command does not obey the filters.
It returns a list of status dictionaries.
[
{
"Remote": "file.txt",
"VersionID": null,
"RestoreStatus": {
"IsRestoreInProgress": true,
"RestoreExpiryDate": "2023-09-06T12:29:19+01:00"
},
"StorageClass": "GLACIER"
},
{
"Remote": "test.pdf",
"VersionID": null,
"RestoreStatus": {
"IsRestoreInProgress": false,
"RestoreExpiryDate": "2023-09-06T12:29:19+01:00"
},
"StorageClass": "DEEP_ARCHIVE"
}
]
Options:
- "all": if set then show all objects, not just ones with restore status
### list-multipart-uploads
List the unfinished multipart uploads
@ -3315,6 +3399,30 @@ It may return "Enabled", "Suspended" or "Unversioned". Note that once versioning
has been enabled the status can't be set back to "Unversioned".
### set
Set command for updating the config parameters.
rclone backend set remote: [options] [<arguments>+]
This set command can be used to update the config parameters
for a running s3 backend.
Usage Examples:
rclone backend set s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=s3: -o session_token=X -o access_key_id=X -o secret_access_key=X
The option keys are named as they are in the config file.
This rebuilds the connection to the s3 backend when it is called with
the new parameters. Only new parameters need be passed as the values
will default to those currently in use.
It doesn't return anything.
{{< rem autogenerated options stop >}}
### Anonymous access to public buckets

View file

@ -556,6 +556,42 @@ Properties:
- Type: bool
- Default: false
#### --sftp-ssh
Path and arguments to external ssh binary.
Normally rclone will use its internal ssh library to connect to the
SFTP server. However it does not implement all possible ssh options so
it may be desirable to use an external ssh binary.
Rclone ignores all the internal config if you use this option and
expects you to configure the ssh binary with the user/host/port and
any other options you need.
**Important** The ssh command must log in without asking for a
password so needs to be configured with keys or certificates.
Rclone will run the command supplied either with the additional
arguments "-s sftp" to access the SFTP subsystem or with commands such
as "md5sum /path/to/file" appended to read checksums.
Any arguments with spaces in should be surrounded by "double quotes".
An example setting might be:
ssh -o ServerAliveInterval=20 user@example.com
Note that when using an external ssh binary rclone makes a new ssh
connection for every hash it calculates.
Properties:
- Config: ssh
- Env Var: RCLONE_SFTP_SSH
- Type: SpaceSepList
- Default:
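
For example, the same setting can be given on the command line (the
remote name is a placeholder):

```
rclone lsd mysftp: --sftp-ssh "ssh -o ServerAliveInterval=20 user@example.com"
```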
### Advanced options
Here are the Advanced options specific to sftp (SSH/SFTP).
@ -608,6 +644,18 @@ E.g. if shared folders can be found in directories representing volumes:
E.g. if home directory can be found in a shared folder called "home":
rclone sync /home/local/directory remote:/home/directory --sftp-path-override /volume1/homes/USER/directory
To specify only the path to the SFTP remote's root, and allow rclone to add any relative subpaths automatically (including unwrapping/decrypting remotes as necessary), add the '@' character to the beginning of the path.
E.g. the first example above could be rewritten as:
rclone sync /home/local/directory remote:/directory --sftp-path-override @/volume2
Note that when using this method with Synology "home" folders, the full "/homes/USER" path should be specified instead of "/home".
E.g. the second example above should be rewritten as:
rclone sync /home/local/directory remote:/homes/USER/directory --sftp-path-override @/volume1
Properties:
@ -703,6 +751,15 @@ Specifies the path or command to run a sftp server on the remote host.
The subsystem option is ignored when server_command is defined.
If adding server_command to the configuration file please note that
it should not be enclosed in quotes, since that will make rclone fail.
A working example is:
[remote_name]
type = sftp
server_command = sudo /usr/libexec/openssh/sftp-server
Properties:
- Config: server_command
@ -941,6 +998,24 @@ Properties:
- Type: SpaceSepList
- Default:
#### --sftp-socks-proxy
Socks 5 proxy host.
Supports the format user:pass@host:port, user@host:port, host:port.
Example:
myUser:myPass@localhost:9005
Properties:
- Config: socks_proxy
- Env Var: RCLONE_SFTP_SOCKS_PROXY
- Type: string
- Required: false
{{< rem autogenerated options stop >}}
## Limitations

View file

@ -154,6 +154,32 @@ as they can't be used in JSON strings.
Here are the Standard options specific to sharefile (Citrix Sharefile).
#### --sharefile-client-id
OAuth Client Id.
Leave blank normally.
Properties:
- Config: client_id
- Env Var: RCLONE_SHAREFILE_CLIENT_ID
- Type: string
- Required: false
#### --sharefile-client-secret
OAuth Client Secret.
Leave blank normally.
Properties:
- Config: client_secret
- Env Var: RCLONE_SHAREFILE_CLIENT_SECRET
- Type: string
- Required: false
#### --sharefile-root-folder-id
ID of the root folder.
@ -183,6 +209,43 @@ Properties:
Here are the Advanced options specific to sharefile (Citrix Sharefile).
#### --sharefile-token
OAuth Access Token as a JSON blob.
Properties:
- Config: token
- Env Var: RCLONE_SHAREFILE_TOKEN
- Type: string
- Required: false
#### --sharefile-auth-url
Auth server URL.
Leave blank to use the provider defaults.
Properties:
- Config: auth_url
- Env Var: RCLONE_SHAREFILE_AUTH_URL
- Type: string
- Required: false
#### --sharefile-token-url
Token server url.
Leave blank to use the provider defaults.
Properties:
- Config: token_url
- Env Var: RCLONE_SHAREFILE_TOKEN_URL
- Type: string
- Required: false
#### --sharefile-upload-cutoff
Cutoff for switching to multipart upload.

38665
rclone.1 generated

File diff suppressed because it is too large Load diff