Version v1.59.0

This commit is contained in:
Nick Craig-Wood 2022-07-09 18:08:20 +01:00
parent 1c4ee2feee
commit 00a684d877
101 changed files with 21596 additions and 3818 deletions

MANUAL.html generated (5068 changed lines; diff suppressed because it is too large)

MANUAL.md generated (5223 changed lines; diff suppressed because it is too large)

MANUAL.txt generated (5274 changed lines; diff suppressed because it is too large)


@@ -91,7 +91,7 @@ Copy another local directory to the alias directory called source
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/alias/alias.go then run make backenddocs" >}}
### Standard options
-Here are the standard options specific to alias (Alias for an existing remote).
+Here are the Standard options specific to alias (Alias for an existing remote).
#### --alias-remote
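For orientation, an alias remote in `rclone.conf` is just a pointer at another remote; a minimal sketch (the remote name `mydrive` and the path are hypothetical):

```
[myalias]
type = alias
remote = mydrive:path/to/dir
```

With this in place, `rclone ls myalias:` would list `mydrive:path/to/dir`.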


@@ -160,7 +160,7 @@ rclone it will take you to an `amazon.com` page to log in. Your
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/amazonclouddrive/amazonclouddrive.go then run make backenddocs" >}}
### Standard options
-Here are the standard options specific to amazon cloud drive (Amazon Drive).
+Here are the Standard options specific to amazon cloud drive (Amazon Drive).
#### --acd-client-id
@@ -190,7 +190,7 @@ Properties:
### Advanced options
-Here are the advanced options specific to amazon cloud drive (Amazon Drive).
+Here are the Advanced options specific to amazon cloud drive (Amazon Drive).
#### --acd-token


@@ -158,7 +158,7 @@ untrusted environment such as a CI build server.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/azureblob/azureblob.go then run make backenddocs" >}}
### Standard options
-Here are the standard options specific to azureblob (Microsoft Azure Blob Storage).
+Here are the Standard options specific to azureblob (Microsoft Azure Blob Storage).
#### --azureblob-account
@@ -255,7 +255,7 @@ Properties:
### Advanced options
-Here are the advanced options specific to azureblob (Microsoft Azure Blob Storage).
+Here are the Advanced options specific to azureblob (Microsoft Azure Blob Storage).
#### --azureblob-msi-object-id


@@ -328,7 +328,7 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/b2/b2.go then run make backenddocs" >}}
### Standard options
-Here are the standard options specific to b2 (Backblaze B2).
+Here are the Standard options specific to b2 (Backblaze B2).
#### --b2-account
@@ -365,7 +365,7 @@ Properties:
### Advanced options
-Here are the advanced options specific to b2 (Backblaze B2).
+Here are the Advanced options specific to b2 (Backblaze B2).
#### --b2-endpoint
@@ -415,6 +415,20 @@ Properties:
- Type: bool
- Default: false
#### --b2-version-at
Show file versions as they were at the specified time.
Note that when using this no file write operations are permitted,
so you can't upload files or delete them.
Properties:
- Config: version_at
- Env Var: RCLONE_B2_VERSION_AT
- Type: Time
- Default: off
#### --b2-upload-cutoff
Cutoff for switching to chunked upload.


@@ -267,7 +267,7 @@ the `root_folder_id` in the config.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/box/box.go then run make backenddocs" >}}
### Standard options
-Here are the standard options specific to box (Box).
+Here are the Standard options specific to box (Box).
#### --box-client-id
@@ -341,7 +341,7 @@ Properties:
### Advanced options
-Here are the advanced options specific to box (Box).
+Here are the Advanced options specific to box (Box).
#### --box-token


@@ -307,7 +307,7 @@ Params:
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/cache/cache.go then run make backenddocs" >}}
### Standard options
-Here are the standard options specific to cache (Cache a remote).
+Here are the Standard options specific to cache (Cache a remote).
#### --cache-remote
@@ -423,7 +423,7 @@ Properties:
### Advanced options
-Here are the advanced options specific to cache (Cache a remote).
+Here are the Advanced options specific to cache (Cache a remote).
#### --cache-plex-token
@@ -672,7 +672,7 @@ Run them with
The help below will explain what arguments each command takes.
-See [the "rclone backend" command](/commands/rclone_backend/) for more
+See the [backend](/commands/rclone_backend/) command for more
info on how to pass options and arguments.
These can be run on a running backend using the rc command


@@ -5,6 +5,165 @@ description: "Rclone Changelog"
# Changelog
## v1.59.0 - 2022-07-09
[See commits](https://github.com/rclone/rclone/compare/v1.58.0...v1.59.0)
* New backends
* [Combine](/combine/) multiple remotes in one directory tree (Nick Craig-Wood)
* [Hidrive](/hidrive/) (Ovidiu Victor Tatar)
* [Internet Archive](/internetarchive/) (Lesmiscore (Naoya Ozaki))
* New S3 providers
* [ArvanCloud AOS](/s3/#arvan-cloud) (ehsantdy)
* [Cloudflare R2](/s3/#cloudflare-r2) (Nick Craig-Wood)
* [Huawei OBS](/s3/#huawei-obs) (m00594701)
* [IDrive e2](/s3/#idrive-e2) (vyloy)
* New commands
* [test makefile](/commands/rclone_test_makefile/): Create a single file for testing (Nick Craig-Wood)
* New Features
* [Metadata framework](/docs/#metadata) to read and write system and user metadata on backends (Nick Craig-Wood)
* Implemented initially for `local`, `s3` and `internetarchive` backends
* `--metadata`/`-M` flag to control whether metadata is copied
* `--metadata-set` flag to specify metadata for uploads
* Thanks to [Manz Solutions](https://manz-solutions.at/) for sponsoring this work.
* build
* Update to go1.18 and make go1.16 the minimum required version (Nick Craig-Wood)
* Update android go build to 1.18.x and NDK to 23.1.7779620 (Nick Craig-Wood)
* All Windows binaries are now built without CGO (Nick Craig-Wood)
* Add `linux/arm/v6` to docker images (Nick Craig-Wood)
* A huge number of fixes found with [staticcheck](https://staticcheck.io/) (albertony)
* Configurable version suffix independent of version number (albertony)
* check: Implement `--no-traverse` and `--no-unicode-normalization` (Nick Craig-Wood)
* config: Readability improvements (albertony)
* copyurl: Add `--header-filename` to honor the HTTP header filename directive (J-P Treen)
* filter: Allow multiple `--exclude-if-present` flags (albertony)
* fshttp: Add `--disable-http-keep-alives` to disable HTTP Keep Alives (Nick Craig-Wood)
* install.sh
* Set the modes on the files and/or directories on macOS (Michael C Tiernan - MIT-Research Computing Project)
* Pre verify sudo authorization `-v` before calling curl. (Michael C Tiernan - MIT-Research Computing Project)
* lib/encoder: Add Semicolon encoding (Nick Craig-Wood)
* lsf: Add metadata support with `M` flag (Nick Craig-Wood)
* lsjson: Add `--metadata`/`-M` flag (Nick Craig-Wood)
* ncdu
* Implement multi selection (CrossR)
* Replace termbox with tcell's termbox wrapper (eNV25)
* Display correct path in delete confirmation dialog (Roberto Ricci)
* operations
* Speed up hash checking by aborting the other hash if first returns nothing (Nick Craig-Wood)
* Use correct src/dst in some log messages (zzr93)
* rcat: Check checksums by default like copy does (Nick Craig-Wood)
* selfupdate: Replace deprecated `x/crypto/openpgp` package with `ProtonMail/go-crypto` (albertony)
* serve ftp: Check `--passive-port` arguments are correct (Nick Craig-Wood)
* size: Warn about inaccurate results when counting objects with unknown size (albertony)
* sync: Overlap check is now filter-sensitive so `--backup-dir` can be in the root provided it is filtered (Nick)
* test info: Check file name lengths using 1,2,3,4 byte unicode characters (Nick Craig-Wood)
* test makefile(s): `--sparse`, `--zero`, `--pattern`, `--ascii`, `--chargen` flags to control file contents (Nick Craig-Wood)
* Make sure we call the `Shutdown` method on backends (Martin Czygan)
* Bug Fixes
* accounting: Fix unknown length file transfers counting 3 transfers each (buda)
* ncdu: Fix issue where dir size is summed when file sizes are -1 (albertony)
* sync/copy/move
* Fix `--fast-list` `--create-empty-src-dirs` and `--exclude` (Nick Craig-Wood)
* Fix `--max-duration` and `--cutoff-mode soft` (Nick Craig-Wood)
* Fix fs cache unpin (Martin Czygan)
* Set proper exit code for errors that are not low-level retried (e.g. size/timestamp changing) (albertony)
* Mount
* Support `windows/arm64` (may still be problems - see [#5828](https://github.com/rclone/rclone/issues/5828)) (Nick Craig-Wood)
* Log IO errors at ERROR level (Nick Craig-Wood)
* Ignore `_netdev` mount argument (Hugal31)
* VFS
* Add `--vfs-fast-fingerprint` for less accurate but faster fingerprints (Nick Craig-Wood)
* Add `--vfs-disk-space-total-size` option to manually set the total disk space (Claudio Maradonna)
* vfscache: Fix fatal error: sync: unlock of unlocked mutex error (Nick Craig-Wood)
* Local
* Fix parsing of `--local-nounc` flag (Nick Craig-Wood)
* Add Metadata support (Nick Craig-Wood)
* Crypt
* Support metadata (Nick Craig-Wood)
* Azure Blob
* Calculate Chunksize/blocksize to stay below maxUploadParts (Leroy van Logchem)
* Use chunksize lib to determine chunksize dynamically (Derek Battams)
* Case insensitive access tier (Rob Pickerill)
* Allow remote emulator (azurite) (Lorenzo Maiorfi)
* B2
* Add `--b2-version-at` flag to show file versions at time specified (SwazRGB)
* Use chunksize lib to determine chunksize dynamically (Derek Battams)
* Chunker
* Mark as not supporting metadata (Nick Craig-Wood)
* Compress
* Support metadata (Nick Craig-Wood)
* Drive
* Make `backend config -o config` add a combined `AllDrives:` remote (Nick Craig-Wood)
* Make `--drive-shared-with-me` work with shared drives (Nick Craig-Wood)
* Add `--drive-resource-key` for accessing link-shared files (Nick Craig-Wood)
* Add backend commands `exportformats` and `importformats` for debugging (Nick Craig-Wood)
* Fix 404 errors on copy/server side copy objects from public folder (Nick Craig-Wood)
* Update Internal OAuth consent screen docs (Phil Shackleton)
* Moved `root_folder_id` to advanced section (Abhiraj)
* Dropbox
* Migrate from deprecated api (m8rge)
* Add logs to show when poll interval limits are exceeded (Nick Craig-Wood)
* Fix nil pointer exception on dropbox impersonate user not found (Nick Craig-Wood)
* Fichier
* Parse api error codes and handle them accordingly (buengese)
* FTP
* Add support for `disable_utf8` option (Jason Zheng)
* Revert to upstream `github.com/jlaffaye/ftp` from our fork (Nick Craig-Wood)
* Google Cloud Storage
* Add `--gcs-no-check-bucket` to minimise transactions and perms (Nick Gooding)
* Add `--gcs-decompress` flag to decompress gzip-encoded files (Nick Craig-Wood)
* by default these will be downloaded compressed (which previously failed)
* Hasher
* Support metadata (Nick Craig-Wood)
* HTTP
* Fix missing response when using custom auth handler (albertony)
* Jottacloud
* Add support for upload to custom device and mountpoint (albertony)
* Always store username in config and use it to avoid initial API request (albertony)
* Fix issue with server-side copy when destination is in trash (albertony)
* Fix listing output of remote with special characters (albertony)
* Mailru
* Fix timeout by using int instead of time.Duration for keeping number of seconds (albertony)
* Mega
* Document using MEGAcmd to help with login failures (Art M. Gallagher)
* Onedrive
* Implement `--poll-interval` for onedrive (Hugo Laloge)
* Add access scopes option (Sven Gerber)
* Opendrive
* Resolve lag and truncate bugs (Scott Grimes)
* Pcloud
* Fix about with no free space left (buengese)
* Fix cleanup (buengese)
* S3
* Use PUT Object instead of presigned URLs to upload single part objects (Nick Craig-Wood)
* Backend restore command to skip non-GLACIER objects (Vincent Murphy)
* Use chunksize lib to determine chunksize dynamically (Derek Battams)
* Retry RequestTimeout errors (Nick Craig-Wood)
* Implement reading and writing of metadata (Nick Craig-Wood)
* SFTP
* Add support for about and hashsum on windows server (albertony)
* Use vendor-specific VFS statistics extension for about if available (albertony)
* Add `--sftp-chunk-size` to control packets sizes for high latency links (Nick Craig-Wood)
* Add `--sftp-concurrency` to improve high latency transfers (Nick Craig-Wood)
* Add `--sftp-set-env` option to set environment variables (Nick Craig-Wood)
* Add Hetzner Storage Boxes to supported sftp backends (Anthrazz)
* Storj
* Fix put which led to the file being unreadable when using mount (Erik van Velzen)
* Union
* Add `min_free_space` option for `lfs`/`eplfs` policies (Nick Craig-Wood)
* Fix uploading files to union of all bucket based remotes (Nick Craig-Wood)
* Fix get free space for remotes which don't support it (Nick Craig-Wood)
* Fix `eplus` policy to select correct entry for existing files (Nick Craig-Wood)
* Support metadata (Nick Craig-Wood)
* Uptobox
* Fix root path handling (buengese)
* WebDAV
* Add SharePoint in other specific regions support (Noah Hsu)
* Yandex
* Handle api error on server-side move (albertony)
* Zoho
* Add Japan and China regions (buengese)
## v1.58.1 - 2022-04-29
[See commits](https://github.com/rclone/rclone/compare/v1.58.0...v1.58.1)


@@ -313,7 +313,7 @@ Changing `transactions` is dangerous and requires explicit migration.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/chunker/chunker.go then run make backenddocs" >}}
### Standard options
-Here are the standard options specific to chunker (Transparently chunk/split large files).
+Here are the Standard options specific to chunker (Transparently chunk/split large files).
#### --chunker-remote
@@ -372,7 +372,7 @@ Properties:
### Advanced options
-Here are the advanced options specific to chunker (Transparently chunk/split large files).
+Here are the Advanced options specific to chunker (Transparently chunk/split large files).
#### --chunker-name-format


@@ -127,7 +127,7 @@ See [the Google Drive docs](/drive/#drives) for full info.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/combine/combine.go then run make backenddocs" >}}
### Standard options
-Here are the standard options specific to combine (Combine several remotes into one).
+Here are the Standard options specific to combine (Combine several remotes into one).
#### --combine-upstreams
@@ -153,4 +153,10 @@ Properties:
- Type: SpaceSepList
- Default:
### Metadata
Any metadata supported by the underlying remote is read and written.
See the [metadata](/docs/#metadata) docs for more info.
{{< rem autogenerated options stop >}}


@@ -42,7 +42,7 @@ See the [global flags page](/flags/) for global options not listed here.
* [rclone check](/commands/rclone_check/) - Checks the files in the source and destination match.
* [rclone checksum](/commands/rclone_checksum/) - Checks the files in the source against a SUM file.
* [rclone cleanup](/commands/rclone_cleanup/) - Clean up the remote if possible.
-* [rclone completion](/commands/rclone_completion/) - generate the autocompletion script for the specified shell
+* [rclone completion](/commands/rclone_completion/) - Generate the autocompletion script for the specified shell
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
* [rclone copy](/commands/rclone_copy/) - Copy files from source to dest, skipping identical files.
* [rclone copyto](/commands/rclone_copyto/) - Copy files from source to dest, skipping identical files.


@@ -16,6 +16,10 @@ Checks the files in the source and destination match. It compares
sizes and hashes (MD5 or SHA1) and logs a report of files that don't
match. It doesn't alter the source or destination.
For the [crypt](/crypt/) remote there is a dedicated command,
[cryptcheck](/commands/rclone_cryptcheck/), that is able to check
the checksums of the crypted files.
If you supply the `--size-only` flag, it will only compare the sizes not
the hashes as well. Use this for a quick check.


@@ -1,17 +1,16 @@
---
title: "rclone completion"
-description: "generate the autocompletion script for the specified shell"
+description: "Generate the autocompletion script for the specified shell"
slug: rclone_completion
url: /commands/rclone_completion/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/completion/ and as part of making a release run "make commanddocs"
---
# rclone completion
-generate the autocompletion script for the specified shell
+Generate the autocompletion script for the specified shell
## Synopsis
Generate the autocompletion script for rclone for the specified shell.
See each sub-command's help for details on how to use the generated script.
@@ -27,8 +26,8 @@ See the [global flags page](/flags/) for global options not listed here.
## SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-* [rclone completion bash](/commands/rclone_completion_bash/) - generate the autocompletion script for bash
-* [rclone completion fish](/commands/rclone_completion_fish/) - generate the autocompletion script for fish
-* [rclone completion powershell](/commands/rclone_completion_powershell/) - generate the autocompletion script for powershell
-* [rclone completion zsh](/commands/rclone_completion_zsh/) - generate the autocompletion script for zsh
+* [rclone completion bash](/commands/rclone_completion_bash/) - Generate the autocompletion script for bash
+* [rclone completion fish](/commands/rclone_completion_fish/) - Generate the autocompletion script for fish
+* [rclone completion powershell](/commands/rclone_completion_powershell/) - Generate the autocompletion script for powershell
+* [rclone completion zsh](/commands/rclone_completion_zsh/) - Generate the autocompletion script for zsh


@@ -1,33 +1,37 @@
---
title: "rclone completion bash"
-description: "generate the autocompletion script for bash"
+description: "Generate the autocompletion script for bash"
slug: rclone_completion_bash
url: /commands/rclone_completion_bash/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/completion/bash/ and as part of making a release run "make commanddocs"
---
# rclone completion bash
-generate the autocompletion script for bash
+Generate the autocompletion script for bash
## Synopsis
Generate the autocompletion script for the bash shell.
This script depends on the 'bash-completion' package.
If it is not installed already, you can install it via your OS's package manager.
To load completions in your current shell session:
-$ source <(rclone completion bash)
+source <(rclone completion bash)
To load completions for every new session, execute once:
-Linux:
-$ rclone completion bash > /etc/bash_completion.d/rclone
-MacOS:
-$ rclone completion bash > /usr/local/etc/bash_completion.d/rclone
+### Linux:
+rclone completion bash > /etc/bash_completion.d/rclone
+### macOS:
+rclone completion bash > /usr/local/etc/bash_completion.d/rclone
You will need to start a new shell for this setup to take effect.
```
rclone completion bash
@@ -44,5 +48,5 @@ See the [global flags page](/flags/) for global options not listed here.
## SEE ALSO
-* [rclone completion](/commands/rclone_completion/) - generate the autocompletion script for the specified shell
+* [rclone completion](/commands/rclone_completion/) - Generate the autocompletion script for the specified shell


@@ -1,24 +1,25 @@
---
title: "rclone completion fish"
-description: "generate the autocompletion script for fish"
+description: "Generate the autocompletion script for fish"
slug: rclone_completion_fish
url: /commands/rclone_completion_fish/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/completion/fish/ and as part of making a release run "make commanddocs"
---
# rclone completion fish
-generate the autocompletion script for fish
+Generate the autocompletion script for fish
## Synopsis
Generate the autocompletion script for the fish shell.
To load completions in your current shell session:
-$ rclone completion fish | source
+rclone completion fish | source
To load completions for every new session, execute once:
-$ rclone completion fish > ~/.config/fish/completions/rclone.fish
+rclone completion fish > ~/.config/fish/completions/rclone.fish
You will need to start a new shell for this setup to take effect.
@@ -38,5 +39,5 @@ See the [global flags page](/flags/) for global options not listed here.
## SEE ALSO
-* [rclone completion](/commands/rclone_completion/) - generate the autocompletion script for the specified shell
+* [rclone completion](/commands/rclone_completion/) - Generate the autocompletion script for the specified shell


@@ -1,21 +1,21 @@
---
title: "rclone completion powershell"
-description: "generate the autocompletion script for powershell"
+description: "Generate the autocompletion script for powershell"
slug: rclone_completion_powershell
url: /commands/rclone_completion_powershell/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/completion/powershell/ and as part of making a release run "make commanddocs"
---
# rclone completion powershell
-generate the autocompletion script for powershell
+Generate the autocompletion script for powershell
## Synopsis
Generate the autocompletion script for powershell.
To load completions in your current shell session:
-PS C:\> rclone completion powershell | Out-String | Invoke-Expression
+rclone completion powershell | Out-String | Invoke-Expression
To load completions for every new session, add the output of the above command
to your powershell profile.
@@ -36,5 +36,5 @@ See the [global flags page](/flags/) for global options not listed here.
## SEE ALSO
-* [rclone completion](/commands/rclone_completion/) - generate the autocompletion script for the specified shell
+* [rclone completion](/commands/rclone_completion/) - Generate the autocompletion script for the specified shell


@@ -1,29 +1,32 @@
---
title: "rclone completion zsh"
-description: "generate the autocompletion script for zsh"
+description: "Generate the autocompletion script for zsh"
slug: rclone_completion_zsh
url: /commands/rclone_completion_zsh/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/completion/zsh/ and as part of making a release run "make commanddocs"
---
# rclone completion zsh
-generate the autocompletion script for zsh
+Generate the autocompletion script for zsh
## Synopsis
Generate the autocompletion script for the zsh shell.
If shell completion is not already enabled in your environment you will need
to enable it. You can execute the following once:
-$ echo "autoload -U compinit; compinit" >> ~/.zshrc
+echo "autoload -U compinit; compinit" >> ~/.zshrc
To load completions for every new session, execute once:
-# Linux:
-$ rclone completion zsh > "${fpath[1]}/_rclone"
-# macOS:
-$ rclone completion zsh > /usr/local/share/zsh/site-functions/_rclone
+### Linux:
+rclone completion zsh > "${fpath[1]}/_rclone"
+### macOS:
+rclone completion zsh > /usr/local/share/zsh/site-functions/_rclone
You will need to start a new shell for this setup to take effect.
@@ -43,5 +46,5 @@ See the [global flags page](/flags/) for global options not listed here.
## SEE ALSO
-* [rclone completion](/commands/rclone_completion/) - generate the autocompletion script for the specified shell
+* [rclone completion](/commands/rclone_completion/) - Generate the autocompletion script for the specified shell


@@ -14,13 +14,18 @@ Copy files from source to dest, skipping identical files.
Copy the source to the destination. Does not transfer files that are
identical on source and destination, testing by size and modification
-time or MD5SUM. Doesn't delete files from the destination.
+time or MD5SUM. Doesn't delete files from the destination. If you
+want to also delete files from destination, to make it match source,
+use the [sync](/commands/rclone_sync/) command instead.
Note that it is always the contents of the directory that is synced,
-not the directory so when source:path is a directory, it's the
+not the directory itself. So when source:path is a directory, it's the
contents of source:path that are copied, not the directory name and
contents.
To copy single files, use the [copyto](/commands/rclone_copyto/)
command instead.
If dest:path doesn't exist, it is created and the source:path contents
go there.


@@ -16,8 +16,8 @@ If source:path is a file or directory then it copies it to a file or
directory named dest:path.
This can be used to upload single files to other than their current
-name. If the source is a directory then it acts exactly like the copy
-command.
+name. If the source is a directory then it acts exactly like the
+[copy](/commands/rclone_copy/) command.
So


@@ -15,10 +15,11 @@ Copy url content to dest.
Download a URL's content and copy it to the destination without saving
it in temporary storage.
-Setting `--auto-filename` will cause the file name to be retrieved from
-the URL (after any redirections) and used in the destination
-path. With `--print-filename` in addition, the resulting file name will
-be printed.
+Setting `--auto-filename` will attempt to automatically determine the filename
+from the URL (after any redirections) and use it in the destination path.
+With `--header-filename` in addition, if a specific filename is set in the
+HTTP headers, it will be used instead of the name from the URL.
+With `--print-filename` in addition, the resulting file name will be printed.
Setting `--no-clobber` will prevent overwriting file on the
destination if there is one with the same name.
@@ -34,11 +35,12 @@ rclone copyurl https://example.com dest:path [flags]
## Options
```
-  -a, --auto-filename    Get the file name from the URL and use it for destination file path
-  -h, --help             help for copyurl
-      --no-clobber       Prevent overwriting file with same name
-  -p, --print-filename   Print the resulting name from --auto-filename
-      --stdout           Write the output to stdout rather than a file
+  -a, --auto-filename     Get the file name from the URL and use it for destination file path
+      --header-filename   Get the file name from the Content-Disposition header
+  -h, --help              help for copyurl
+      --no-clobber        Prevent overwriting file with same name
+  -p, --print-filename    Print the resulting name from --auto-filename
+      --stdout            Write the output to stdout rather than a file
```
See the [global flags page](/flags/) for global options not listed here.
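The idea behind header-derived naming can be sketched outside rclone: given a sample `Content-Disposition` value (the header string below is hypothetical), the quoted filename is what `--header-filename` picks up:

```shell
# Hypothetical Content-Disposition header as a server might send it
header='attachment; filename="report-2022.pdf"'
# Extract the quoted filename value; rclone does the equivalent internally
name=$(printf '%s' "$header" | sed -n 's/.*filename="\([^"]*\)".*/\1/p')
echo "$name"   # prints report-2022.pdf
```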


@@ -12,9 +12,9 @@ Cryptcheck checks the integrity of a crypted remote.
## Synopsis
-rclone cryptcheck checks a remote against a crypted remote. This is
-the equivalent of running rclone check, but able to check the
-checksums of the crypted remote.
+rclone cryptcheck checks a remote against a [crypted](/crypt/) remote.
+This is the equivalent of running rclone [check](/commands/rclone_check/),
+but able to check the checksums of the crypted remote.
For it to work the underlying remote of the crypted remote must support
some kind of checksum.


@@ -15,7 +15,7 @@ Cryptdecode returns unencrypted file names.
rclone cryptdecode returns unencrypted file names when provided with
a list of encrypted file names. List limit is 10 items.
-If you supply the --reverse flag, it will return encrypted file names.
+If you supply the `--reverse` flag, it will return encrypted file names.
use it like this
@@ -23,8 +23,8 @@ use it like this
rclone cryptdecode --reverse encryptedremote: filename1 filename2
-Another way to accomplish this is by using the `rclone backend encode` (or `decode`)command.
-See the documentation on the `crypt` overlay for more info.
+Another way to accomplish this is by using the `rclone backend encode` (or `decode`) command.
+See the documentation on the [crypt](/crypt/) overlay for more info.
```


@@ -22,7 +22,7 @@ Opendrive) that can have duplicate file names. It can be run on wrapping backend
(e.g. crypt) if they wrap a backend which supports duplicate file
names.
-However if --by-hash is passed in then dedupe will find files with
+However if `--by-hash` is passed in then dedupe will find files with
duplicate hashes instead which will work on any backend which supports
at least one hash. This can be used to find files with duplicate
content. This is known as deduping by hash.


@@ -12,16 +12,16 @@ Remove the files in path.
## Synopsis
-Remove the files in path. Unlike `purge` it obeys include/exclude
-filters so can be used to selectively delete files.
+Remove the files in path. Unlike [purge](/commands/rclone_purge/) it
+obeys include/exclude filters so can be used to selectively delete files.
`rclone delete` only deletes files but leaves the directory structure
alone. If you want to delete a directory and all of its contents use
-the `purge` command.
+the [purge](/commands/rclone_purge/) command.
If you supply the `--rmdirs` flag, it will remove all empty directories along with it.
-You can also use the separate command `rmdir` or `rmdirs` to
-delete empty directories only.
+You can also use the separate command [rmdir](/commands/rclone_rmdir/) or
+[rmdirs](/commands/rclone_rmdirs/) to delete empty directories only.
For example, to delete all files bigger than 100 MiB, you may first want to
check what would be deleted (use either):


@@ -13,7 +13,7 @@ Output completion script for a given shell.
Generates a shell completion script for rclone.
-Run with --help to list the supported shells.
+Run with `--help` to list the supported shells.
## Options


@@ -21,6 +21,9 @@ not supported by the remote, no hash will be returned. With the
download flag, the file will be downloaded from the remote and
hashed locally enabling any hash for any remote.
For the MD5 and SHA1 algorithms there are also dedicated commands,
[md5sum](/commands/rclone_md5sum/) and [sha1sum](/commands/rclone_sha1sum/).
This command can also hash data received on standard input (stdin),
by not passing a remote:path, or by passing a hyphen as remote:path
when there is data to read (if not, the hyphen will be treated literally,
@@ -36,6 +39,7 @@ Run without a hash to see the list of all supported hashes, e.g.
* crc32
* sha256
* dropbox
* hidrive
* mailru
* quickxor
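The stdin behaviour mirrors the coreutils tools; a sketch using `md5sum` as a stand-in for `rclone hashsum MD5 -` (same piping pattern, assuming coreutils is installed):

```shell
# Hash data arriving on stdin rather than from a remote:path; the rclone
# equivalent would be: printf 'hello world' | rclone hashsum MD5 -
digest=$(printf 'hello world' | md5sum | awk '{print $1}')
echo "$digest"   # prints 5eb63bbbe01eeed093cb22bb8f5acdc3
```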


@@ -14,7 +14,7 @@ List all the remotes in the config file.
rclone listremotes lists all the available remotes from the config file.
-When uses with the -l flag it lists the types too.
+When used with the `--long` flag it lists the types too.
```


@@ -13,7 +13,7 @@ List all directories/containers/buckets in the path.
Lists the directories in the source path to standard output. Does not
-recurse by default. Use the -R flag to recurse.
+recurse by default. Use the `-R` flag to recurse.
This command lists the total size of the directory (if known, -1 if
not), the modification time (if known, the current time if not), the
@@ -31,7 +31,7 @@ Or
-1 2017-01-03 14:40:54 -1 2500files
-1 2017-07-08 14:39:28 -1 4000files
-If you just want the directory names use "rclone lsf --dirs-only".
+If you just want the directory names use `rclone lsf --dirs-only`.
Any of the filtering options can be applied to this command.


@@ -26,7 +26,7 @@ Eg
ferejej3gux/
fubuwic
-Use the --format option to control what gets listed. By default this
+Use the `--format` option to control what gets listed. By default this
is just the path, but you can use these parameters to control the
output:
@@ -39,9 +39,10 @@ output:
m - MimeType of object if known
e - encrypted name
T - tier of storage if known, e.g. "Hot" or "Cool"
M - Metadata of object in JSON blob format, eg {"key":"value"}
So if you wanted the path, size and modification time, you would use
---format "pst", or maybe --format "tsp" to put the path last.
+`--format "pst"`, or maybe `--format "tsp"` to put the path last.
Eg
@@ -53,7 +54,7 @@ Eg
2016-06-25 18:55:40;37600;fubuwic
If you specify "h" in the format you will get the MD5 hash by default,
-use the "--hash" flag to change which hash you want. Note that this
+use the `--hash` flag to change which hash you want. Note that this
can be returned as an empty string if it isn't available on the object
(and for directories), "ERROR" if there was an error reading it from
the object and "UNSUPPORTED" if that object does not support that hash
@@ -75,7 +76,7 @@ Eg
(Though "rclone md5sum ." is an easier way of typing this.)
By default the separator is ";" this can be changed with the
---separator flag. Note that separators aren't escaped in the path so
+`--separator` flag. Note that separators aren't escaped in the path so
putting it last is a good strategy.
Eg
@@ -97,8 +98,8 @@ Eg
test.sh,449
"this file contains a comma, in the file name.txt",6
Note that the --absolute parameter is useful for making lists of files
to pass to an rclone copy with the --files-from-raw flag.
Note that the `--absolute` parameter is useful for making lists of files
to pass to an rclone copy with the `--files-from-raw` flag.
For example, to find all the files modified within one day and copy
those only (without traversing the whole directory structure):

View file

@ -15,7 +15,7 @@ List directories and objects in the path in JSON format.
The output is an array of Items, where each Item looks like this
{
{
"Hashes" : {
"SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f",
"MD5" : "b1946ac92492d2347c6235b4d2611184",
@ -33,29 +33,32 @@ The output is an array of Items, where each Item looks like this
"Path" : "full/path/goes/here/file.txt",
"Size" : 6,
"Tier" : "hot",
}
}
If --hash is not specified the Hashes property won't be emitted. The
types of hash can be specified with the --hash-type parameter (which
may be repeated). If --hash-type is set then it implies --hash.
If `--hash` is not specified the Hashes property won't be emitted. The
types of hash can be specified with the `--hash-type` parameter (which
may be repeated). If `--hash-type` is set then it implies `--hash`.
If --no-modtime is specified then ModTime will be blank. This can
If `--no-modtime` is specified then ModTime will be blank. This can
speed things up on remotes where reading the ModTime takes an extra
request (e.g. s3, swift).
If --no-mimetype is specified then MimeType will be blank. This can
If `--no-mimetype` is specified then MimeType will be blank. This can
speed things up on remotes where reading the MimeType takes an extra
request (e.g. s3, swift).
If --encrypted is not specified the Encrypted won't be emitted.
If `--encrypted` is not specified the Encrypted won't be emitted.
If --dirs-only is not specified files in addition to directories are
If `--dirs-only` is not specified files in addition to directories are
returned
If --files-only is not specified directories in addition to the files
If `--files-only` is not specified directories in addition to the files
will be returned.
if --stat is set then a single JSON blob will be returned about the
If `--metadata` is set then an additional Metadata key will be returned.
This will have metadata in rclone standard format as a JSON object.
if `--stat` is set then a single JSON blob will be returned about the
item pointed to. This will return an error if the item isn't found.
However on bucket based backends (like s3, gcs, b2, azureblob etc) if
the item isn't found it will return an empty directory as it isn't
@ -64,7 +67,7 @@ possible to tell empty directories from missing directories there.
The Path field will only show folders below the remote path being listed.
If "remote:path" contains the file "subfolder/file.txt", the Path for "file.txt"
will be "subfolder/file.txt", not "remote:path/subfolder/file.txt".
When used without --recursive the Path will always be the same as Name.
When used without `--recursive` the Path will always be the same as Name.
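Since Path is relative to the remote path being listed, consuming the output programmatically is straightforward. As a minimal sketch (the item below reuses the example shape from the docs above, not output from a real remote):

```python
import json

# Sample item as emitted by `rclone lsjson` (field names from the example above).
item = json.loads("""
{
  "Hashes": {"MD5": "b1946ac92492d2347c6235b4d2611184"},
  "IsDir": false,
  "MimeType": "application/octet-stream",
  "ModTime": "2017-05-31T16:15:57.034468261+01:00",
  "Name": "file.txt",
  "Path": "full/path/goes/here/file.txt",
  "Size": 6,
  "Tier": "hot"
}
""")

# Path is relative to the remote path being listed, so joining it back
# onto "remote:path" gives the full location of the object.
print(item["Path"], item["Size"], item["Hashes"]["MD5"])
```

The same pattern works on the array returned without `--stat`: parse the whole output with `json.loads` and iterate over the items.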
If the directory is a bucket in a bucket-based backend, then
"IsBucket" will be set to true. This key won't be present unless it is
@ -112,7 +115,7 @@ rclone lsjson remote:path [flags]
```
--dirs-only Show only directories in the listing
-M, --encrypted Show the encrypted names
--encrypted Show the encrypted names
--files-only Show only files in the listing
--hash Include hashes in the output (may take longer)
--hash-type stringArray Show only this hash type (may be repeated)

View file

@ -20,6 +20,10 @@ not supported by the remote, no hash will be returned. With the
download flag, the file will be downloaded from the remote and
hashed locally enabling MD5 for any remote.
For other algorithms, see the [hashsum](/commands/rclone_hashsum/)
command. Running `rclone md5sum remote:path` is equivalent
to running `rclone hashsum MD5 remote:path`.
This command can also hash data received on standard input (stdin),
by not passing a remote:path, or by passing a hyphen as remote:path
when there is data to read (if not, the hyphen will be treated literally,

View file

@ -75,10 +75,10 @@ at all, then 1 PiB is set as both the total and the free size.
To run rclone mount on Windows, you will need to
download and install [WinFsp](http://www.secfs.net/winfsp/).
[WinFsp](https://github.com/billziss-gh/winfsp) is an open-source
[WinFsp](https://github.com/winfsp/winfsp) is an open-source
Windows File System Proxy which makes it easy to write user space file
systems for Windows. It provides a FUSE emulation layer which rclone
uses combination with [cgofuse](https://github.com/billziss-gh/cgofuse).
uses in combination with [cgofuse](https://github.com/winfsp/cgofuse).
Both of these packages are by Bill Zissimopoulos who was very helpful
during the implementation of rclone mount for Windows.
@ -228,7 +228,7 @@ from Microsoft's Sysinternals suite, which has option `-s` to start
processes as the SYSTEM account. Another alternative is to run the mount
command from a Windows Scheduled Task, or a Windows Service, configured
to run as the SYSTEM account. A third alternative is to use the
[WinFsp.Launcher infrastructure](https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture)).
[WinFsp.Launcher infrastructure](https://github.com/winfsp/winfsp/wiki/WinFsp-Service-Architecture)).
Note that when running rclone as another user, it will not use
the configuration file from your profile unless you tell it to
with the [`--config`](https://rclone.org/docs/#config-config-file) option.
@ -410,7 +410,7 @@ about files and directories (but not the data) in memory.
Using the `--dir-cache-time` flag, you can control how long a
directory should be considered up to date and not refreshed from the
backend. Changes made through the mount will appear immediately or
backend. Changes made through the VFS will appear immediately or
invalidate the cache.
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
@ -567,6 +567,38 @@ FAT/exFAT do not. Rclone will perform very badly if the cache
directory is on a filesystem which doesn't support sparse files and it
will log an ERROR message if one is detected.
### Fingerprinting
Various parts of the VFS use fingerprinting to see if a local file
copy has changed relative to a remote file. Fingerprints are made
from:
- size
- modification time
- hash
where available on an object.
On some backends some of these attributes are slow to read (they take
an extra API call per object, or extra work per object).
For example `hash` is slow with the `local` and `sftp` backends as
they have to read the entire file and hash it, and `modtime` is slow
with the `s3`, `swift`, `ftp` and `qingstor` backends because they
need to do an extra API call to fetch it.
If you use the `--vfs-fast-fingerprint` flag then rclone will not
include the slow operations in the fingerprint. This makes the
fingerprinting less accurate but much faster and will improve the
opening time of cached files.
If you are running a vfs cache over `local`, `s3` or `swift` backends
then using this flag is recommended.
Note that if you change the value of this flag, the fingerprints of
the files in the cache may be invalidated and the files will need to
be downloaded again.
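The idea behind the flag can be sketched as follows. This is an illustrative Python sketch of the assumed logic, not rclone's actual code: a fingerprint joins whichever attributes are available, and the fast variant skips the attributes that are slow to read on the backend.

```python
# Hypothetical helper: build a VFS-style fingerprint from an object's
# attributes. With fast=True, attributes listed as slow for this backend
# are left out, trading accuracy for speed.
def fingerprint(size, modtime=None, hashval=None, slow=(), fast=False):
    parts = [str(size)]
    if modtime is not None and not (fast and "modtime" in slow):
        parts.append(modtime)
    if hashval is not None and not (fast and "hash" in slow):
        parts.append(hashval)
    return ",".join(parts)

# On s3, reading the modtime needs an extra API call, so treat it as slow:
full = fingerprint(1024, "2022-07-09T18:08:20Z", "abc123", slow=("modtime",))
fast = fingerprint(1024, "2022-07-09T18:08:20Z", "abc123", slow=("modtime",), fast=True)
```

A file whose size and hash are unchanged keeps the same fast fingerprint even if its modtime read would have required an extra request, which is why the cached copy can be reused without the slow API call.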
## VFS Chunked Reading
When rclone reads files from a remote it reads them in chunks. This
@ -607,7 +639,7 @@ read of the modification time takes a transaction.
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--read-only Mount read-only.
--read-only Only allow read-only access.
Sometimes rclone is delivered reads or writes out of order. Rather
than seeking rclone will wait a short time for the in sequence read or
@ -619,7 +651,7 @@ on disk cache file.
When using VFS write caching (`--vfs-cache-mode` with value writes or full),
the global flag `--transfers` can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag `--checkers` have no effect on mount).
modified files from the cache (the related global flag `--checkers` has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
@ -636,28 +668,35 @@ It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS
file systems case-sensitive but that is not the default.
The `--vfs-case-insensitive` mount flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the mounted
file system as-is. If the flag is "true" (or appears without a value on
The `--vfs-case-insensitive` VFS flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the remote
as-is. If the flag is "true" (or appears without a value on the
command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case
different than what is stored on mounted file system. If an argument refers
different than what is stored on the remote. If an argument refers
to an existing file with exactly the same name, then the case of the existing
file on the disk will be used. However, if a file name with exactly the same
name is not found but a name differing only by case exists, rclone will
transparently fixup the name. This fixup happens only when an existing file
is requested. Case sensitivity of file names created anew by rclone is
controlled by an underlying mounted file system.
controlled by the underlying remote.
Note that case sensitivity of the operating system running rclone (the target)
may differ from case sensitivity of a file system mounted by rclone (the source).
may differ from case sensitivity of a file system presented by rclone (the source).
The flag controls whether "fixup" is performed to satisfy the target.
If the flag is not provided on the command line, then its default value depends
on the operating system where rclone runs: "true" on Windows and macOS, "false"
otherwise. If the flag is provided without a value, then it is "true".
## VFS Disk Options
This flag allows you to manually set the statistics about the filing system.
It can be useful when those statistics cannot be read correctly automatically.
--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
## Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used.
@ -705,7 +744,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
--noapplexattr Ignore all "com.apple.*" extended attributes (supported on OSX only)
-o, --option stringArray Option for libfuse/WinFsp (repeat if required)
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Mount read-only
--read-only Only allow read-only access
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
@ -713,6 +752,8 @@ rclone mount remote:path /path/to/mountpoint [flags]
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)

View file

@ -16,6 +16,9 @@ Moves the contents of the source directory to the destination
directory. Rclone will error if the source and destination overlap and
the remote does not support a server-side directory move operation.
To move single files, use the [moveto](/commands/rclone_moveto/)
command instead.
If no filters are in use and if possible this will server-side move
`source:path` into `dest:path`. After this `source:path` will no
longer exist.
@ -26,7 +29,8 @@ move will be used, otherwise it will copy it (server-side if possible)
into `dest:path` then delete the original (if no errors on copy) in
`source:path`.
If you want to delete empty source directories after move, use the --delete-empty-src-dirs flag.
If you want to delete empty source directories after move, use the
`--delete-empty-src-dirs` flag.
See the [--no-traverse](/docs/#no-traverse) option for controlling
whether rclone lists the destination directory or not. Supplying this

View file

@ -17,7 +17,7 @@ directory named dest:path.
This can be used to rename files or upload single files to other than
their existing name. If the source is a directory then it acts exactly
like the move command.
like the [move](/commands/rclone_move/) command.
So

View file

@ -23,7 +23,8 @@ builds an in memory representation. rclone ncdu can be used during
this scanning phase and you will see it building up the directory
structure as it goes along.
Here are the keys - press '?' to toggle the help on and off
You can interact with the user interface using key presses,
press '?' to toggle the help on and off. The supported keys are:
↑,↓ or k,j to Move
→,l to enter
@ -34,19 +35,41 @@ Here are the keys - press '?' to toggle the help on and off
u toggle human-readable format
n,s,C,A sort by name,size,count,average size
d delete file/directory
v select file/directory
V enter visual select mode
D delete selected files/directories
y copy current path to clipboard
Y display current path
^L refresh screen
^L refresh screen (fix screen corruption)
? to toggle help on and off
q/ESC/c-C to quit
q/ESC/^c to quit
Listed files/directories may be prefixed by a one-character flag,
some of them combined with a description in brackets at the end of the line.
These flags have the following meaning:
e means this is an empty directory, i.e. contains no files (but
may contain empty subdirectories)
~ means this is a directory where some of the files (possibly in
subdirectories) have unknown size, and therefore the directory
size may be underestimated (and average size inaccurate, as it
is the average of the files with known sizes).
. means an error occurred while reading a subdirectory, and
therefore the directory size may be underestimated (and average
size inaccurate)
! means an error occurred while reading this directory
This is an homage to the [ncdu tool](https://dev.yorhel.nl/ncdu) but for
rclone remotes. It is missing lots of features at the moment
but is useful as it stands.
Note that it might take some time to delete big files/folders. The
Note that it might take some time to delete big files/directories. The
UI won't respond in the meantime since the deletion is done synchronously.
For a non-interactive listing of the remote, see the
[tree](/commands/rclone_tree/) command. To just get the total size of
the remote you can also use the [size](/commands/rclone_size/) command.
```
rclone ncdu remote:path [flags]

View file

@ -26,7 +26,7 @@ This command can also accept a password through STDIN instead of an
argument by passing a hyphen as an argument. This will use the first
line of STDIN as the password not including the trailing newline.
echo "secretpassword" | rclone obscure -
echo "secretpassword" | rclone obscure -
If there is no data on STDIN to read, rclone obscure will default to
obfuscating the hyphen itself.

View file

@ -13,9 +13,10 @@ Remove the path and all of its contents.
Remove the path and all of its contents. Note that this does not obey
include/exclude filters - everything will be removed. Use the `delete`
command if you want to selectively delete files. To delete empty directories only,
use command `rmdir` or `rmdirs`.
include/exclude filters - everything will be removed. Use the
[delete](/commands/rclone_delete/) command if you want to selectively
delete files. To delete empty directories only, use command
[rmdir](/commands/rclone_rmdir/) or [rmdirs](/commands/rclone_rmdirs/).
**Important**: Since this can cause data loss, test first with the
`--dry-run` or the `--interactive`/`-i` flag.

View file

@ -13,26 +13,26 @@ Run a command against a running rclone.
This runs a command against a running rclone. Use the --url flag to
This runs a command against a running rclone. Use the `--url` flag to
specify a non-default URL to connect on. This can be either a
":port" which is taken to mean "http://localhost:port" or a
"host:port" which is taken to mean "http://host:port"
A username and password can be passed in with --user and --pass.
A username and password can be passed in with `--user` and `--pass`.
Note that --rc-addr, --rc-user, --rc-pass will be read also for --url,
--user, --pass.
Note that `--rc-addr`, `--rc-user`, `--rc-pass` will be read also for
`--url`, `--user`, `--pass`.
Arguments should be passed in as parameter=value.
The result will be returned as a JSON object by default.
The --json parameter can be used to pass in a JSON blob as an input
The `--json` parameter can be used to pass in a JSON blob as an input
instead of key=value arguments. This is the only way of passing in
more complicated values.
The -o/--opt option can be used to set a key "opt" with key, value
options in the form "-o key=value" or "-o key". It can be repeated as
The `-o`/`--opt` option can be used to set a key "opt" with key, value
options in the form `-o key=value` or `-o key`. It can be repeated as
many times as required. This is useful for rc commands which take the
"opt" parameter which by convention is a dictionary of strings.
@ -43,7 +43,7 @@ Will place this in the "opt" value
{"key":"value", "key2":""}
The -a/--arg option can be used to set strings in the "arg" value. It
The `-a`/`--arg` option can be used to set strings in the "arg" value. It
can be repeated as many times as required. This is useful for rc
commands which take the "arg" parameter which by convention is a list
of strings.
@ -54,13 +54,13 @@ Will place this in the "arg" value
["value", "value2"]
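The mapping from repeated flags to the "opt" and "arg" parameters can be sketched in Python (a hypothetical helper for illustration, not rclone's code):

```python
# Sketch of the assumed behaviour: repeated -o/--opt values become the
# "opt" dictionary, repeated -a/--arg values become the "arg" list.
def parse_rc_flags(opts, args):
    opt = {}
    for o in opts:
        key, _, value = o.partition("=")
        opt[key] = value  # "-o key" with no "=" gives an empty string value
    return opt, list(args)

# Corresponds to: rclone rc ... -o key=value -o key2 -a value -a value2
opt, arg = parse_rc_flags(["key=value", "key2"], ["value", "value2"])
```

This is why a bare `-o key` ends up as an empty-string value in the dictionary, while each `-a` simply appends to the list.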
Use --loopback to connect to the rclone instance running "rclone rc".
Use `--loopback` to connect to the rclone instance running `rclone rc`.
This is very useful for testing commands without having to run an
rclone rc server, e.g.:
rclone rc --loopback operations/about fs=/
Use "rclone rc" to see a list of all possible commands.
Use `rclone rc` to see a list of all possible commands.
```
rclone rc commands parameter [flags]

View file

@ -30,11 +30,11 @@ must fit into RAM. The cutoff needs to be small enough to adhere
the limits of your remote, please see there. Generally speaking,
setting this cutoff too high will decrease your performance.
Use the |--size| flag to preallocate the file in advance at the remote end
Use the `--size` flag to preallocate the file in advance at the remote end
and actually stream it, even if remote backend doesn't support streaming.
|--size| should be the exact size of the input stream in bytes. If the
size of the stream is different in length to the |--size| passed in
`--size` should be the exact size of the input stream in bytes. If the
size of the stream is different in length to the `--size` passed in
then the transfer will likely fail.
Note that the upload can also not be retried because the data is

View file

@ -14,10 +14,10 @@ Remove the empty directory at path.
This removes empty directory given by path. Will not remove the path if it
has any objects in it, not even empty subdirectories. Use
command `rmdirs` (or `delete` with option `--rmdirs`)
to do that.
command [rmdirs](/commands/rclone_rmdirs/) (or [delete](/commands/rclone_delete/)
with option `--rmdirs`) to do that.
To delete a path and any objects in it, use `purge` command.
To delete a path and any objects in it, use [purge](/commands/rclone_purge/) command.
```

View file

@ -17,15 +17,16 @@ that only contain empty directories), that it finds under the path.
The root path itself will also be removed if it is empty, unless
you supply the `--leave-root` flag.
Use command `rmdir` to delete just the empty directory
given by path, not recurse.
Use command [rmdir](/commands/rclone_rmdir/) to delete just the empty
directory given by path, not recurse.
This is useful for tidying up remotes that rclone has left a lot of
empty directories in. For example the `delete` command will
delete files but leave the directory structure (unless used with
option `--rmdirs`).
empty directories in. For example the [delete](/commands/rclone_delete/)
command will delete files but leave the directory structure (unless
used with option `--rmdirs`).
To delete a path and any objects in it, use `purge` command.
To delete a path and any objects in it, use [purge](/commands/rclone_purge/)
command.
```

View file

@ -11,8 +11,8 @@ Serve a remote over a protocol.
## Synopsis
rclone serve is used to serve a remote over a given protocol. This
command requires the use of a subcommand to specify the protocol, e.g.
Serve a remote over a given protocol. Requires the use of a
subcommand to specify the protocol, e.g.
rclone serve http remote:
@ -40,5 +40,5 @@ See the [global flags page](/flags/) for global options not listed here.
* [rclone serve http](/commands/rclone_serve_http/) - Serve the remote over HTTP.
* [rclone serve restic](/commands/rclone_serve_restic/) - Serve the remote for restic's REST API.
* [rclone serve sftp](/commands/rclone_serve_sftp/) - Serve the remote over SFTP.
* [rclone serve webdav](/commands/rclone_serve_webdav/) - Serve remote:path over webdav.
* [rclone serve webdav](/commands/rclone_serve_webdav/) - Serve remote:path over WebDAV.

View file

@ -11,14 +11,16 @@ Serve remote:path over DLNA
## Synopsis
rclone serve dlna is a DLNA media server for media stored in an rclone remote. Many
devices, such as the Xbox and PlayStation, can automatically discover this server in the LAN
and play audio/video from it. VLC is also supported. Service discovery uses UDP multicast
packets (SSDP) and will thus only work on LANs.
Run a DLNA media server for media stored in an rclone remote. Many
devices, such as the Xbox and PlayStation, can automatically discover
this server in the LAN and play audio/video from it. VLC is also
supported. Service discovery uses UDP multicast packets (SSDP) and
will thus only work on LANs.
Rclone will list all files present in the remote, without filtering based on media formats or
file extensions. Additionally, there is no media transcoding support. This means that some
players might show files that they are not able to play back correctly.
Rclone will list all files present in the remote, without filtering
based on media formats or file extensions. Additionally, there is no
media transcoding support. This means that some players might show
files that they are not able to play back correctly.
## Server options
@ -51,7 +53,7 @@ about files and directories (but not the data) in memory.
Using the `--dir-cache-time` flag, you can control how long a
directory should be considered up to date and not refreshed from the
backend. Changes made through the mount will appear immediately or
backend. Changes made through the VFS will appear immediately or
invalidate the cache.
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
@ -208,6 +210,38 @@ FAT/exFAT do not. Rclone will perform very badly if the cache
directory is on a filesystem which doesn't support sparse files and it
will log an ERROR message if one is detected.
### Fingerprinting
Various parts of the VFS use fingerprinting to see if a local file
copy has changed relative to a remote file. Fingerprints are made
from:
- size
- modification time
- hash
where available on an object.
On some backends some of these attributes are slow to read (they take
an extra API call per object, or extra work per object).
For example `hash` is slow with the `local` and `sftp` backends as
they have to read the entire file and hash it, and `modtime` is slow
with the `s3`, `swift`, `ftp` and `qingstor` backends because they
need to do an extra API call to fetch it.
If you use the `--vfs-fast-fingerprint` flag then rclone will not
include the slow operations in the fingerprint. This makes the
fingerprinting less accurate but much faster and will improve the
opening time of cached files.
If you are running a vfs cache over `local`, `s3` or `swift` backends
then using this flag is recommended.
Note that if you change the value of this flag, the fingerprints of
the files in the cache may be invalidated and the files will need to
be downloaded again.
## VFS Chunked Reading
When rclone reads files from a remote it reads them in chunks. This
@ -248,7 +282,7 @@ read of the modification time takes a transaction.
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--read-only Mount read-only.
--read-only Only allow read-only access.
Sometimes rclone is delivered reads or writes out of order. Rather
than seeking rclone will wait a short time for the in sequence read or
@ -260,7 +294,7 @@ on disk cache file.
When using VFS write caching (`--vfs-cache-mode` with value writes or full),
the global flag `--transfers` can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag `--checkers` have no effect on mount).
modified files from the cache (the related global flag `--checkers` has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
@ -277,28 +311,35 @@ It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS
file systems case-sensitive but that is not the default.
The `--vfs-case-insensitive` mount flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the mounted
file system as-is. If the flag is "true" (or appears without a value on
The `--vfs-case-insensitive` VFS flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the remote
as-is. If the flag is "true" (or appears without a value on the
command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case
different than what is stored on mounted file system. If an argument refers
different than what is stored on the remote. If an argument refers
to an existing file with exactly the same name, then the case of the existing
file on the disk will be used. However, if a file name with exactly the same
name is not found but a name differing only by case exists, rclone will
transparently fixup the name. This fixup happens only when an existing file
is requested. Case sensitivity of file names created anew by rclone is
controlled by an underlying mounted file system.
controlled by the underlying remote.
Note that case sensitivity of the operating system running rclone (the target)
may differ from case sensitivity of a file system mounted by rclone (the source).
may differ from case sensitivity of a file system presented by rclone (the source).
The flag controls whether "fixup" is performed to satisfy the target.
If the flag is not provided on the command line, then its default value depends
on the operating system where rclone runs: "true" on Windows and macOS, "false"
otherwise. If the flag is provided without a value, then it is "true".
## VFS Disk Options
This flag allows you to manually set the statistics about the filing system.
It can be useful when those statistics cannot be read correctly automatically.
--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
## Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used.
@ -332,7 +373,7 @@ rclone serve dlna remote:path [flags]
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Mount read-only
--read-only Only allow read-only access
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
@ -340,6 +381,8 @@ rclone serve dlna remote:path [flags]
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)

View file

@ -69,7 +69,7 @@ about files and directories (but not the data) in memory.
Using the `--dir-cache-time` flag, you can control how long a
directory should be considered up to date and not refreshed from the
backend. Changes made through the mount will appear immediately or
backend. Changes made through the VFS will appear immediately or
invalidate the cache.
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
@ -226,6 +226,38 @@ FAT/exFAT do not. Rclone will perform very badly if the cache
directory is on a filesystem which doesn't support sparse files and it
will log an ERROR message if one is detected.
### Fingerprinting
Various parts of the VFS use fingerprinting to see if a local file
copy has changed relative to a remote file. Fingerprints are made
from:
- size
- modification time
- hash
where available on an object.
On some backends some of these attributes are slow to read (they take
an extra API call per object, or extra work per object).
For example `hash` is slow with the `local` and `sftp` backends as
they have to read the entire file and hash it, and `modtime` is slow
with the `s3`, `swift`, `ftp` and `qingstor` backends because they
need to do an extra API call to fetch it.
If you use the `--vfs-fast-fingerprint` flag then rclone will not
include the slow operations in the fingerprint. This makes the
fingerprinting less accurate but much faster and will improve the
opening time of cached files.
If you are running a vfs cache over `local`, `s3` or `swift` backends
then using this flag is recommended.
Note that if you change the value of this flag, the fingerprints of
the files in the cache may be invalidated and the files will need to
be downloaded again.
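The idea behind the fast and full fingerprints can be sketched with ordinary shell tools. This is an illustrative model only, not rclone's actual fingerprint format: the "full" fingerprint includes a hash (the slow part, since the whole file must be read), while the "fast" one drops the slow fields. Modification time is omitted here for brevity.

```shell
# Illustrative sketch only -- not rclone's real fingerprint format.
f=$(mktemp)
printf 'hello' > "$f"
size=$(wc -c < "$f" | tr -d ' ')
hash=$(md5sum "$f" | cut -d ' ' -f 1)   # slow: reads the entire file
fast_fp="$size"                          # what --vfs-fast-fingerprint keeps
full_fp="$size,$hash"                    # full fingerprint adds the hash
echo "fast: $fast_fp"
echo "full: $full_fp"
rm -f "$f"
```

This also shows why changing the flag invalidates cached fingerprints: `fast_fp` and `full_fp` never compare equal for the same file.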
## VFS Chunked Reading
When rclone reads files from a remote it reads them in chunks. This
@ -266,7 +298,7 @@ read of the modification time takes a transaction.
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--read-only Mount read-only.
--read-only Only allow read-only access.
Sometimes rclone is delivered reads or writes out of order. Rather
than seeking rclone will wait a short time for the in sequence read or
@ -278,7 +310,7 @@ on disk cache file.
When using VFS write caching (`--vfs-cache-mode` with value writes or full),
the global flag `--transfers` can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag `--checkers` have no effect on mount).
modified files from the cache (the related global flag `--checkers` has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
@ -295,28 +327,35 @@ It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS
file systems case-sensitive but that is not the default.
The `--vfs-case-insensitive` mount flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the mounted
file system as-is. If the flag is "true" (or appears without a value on
The `--vfs-case-insensitive` VFS flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the remote
as-is. If the flag is "true" (or appears without a value on the
command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case
different than what is stored on mounted file system. If an argument refers
different than what is stored on the remote. If an argument refers
to an existing file with exactly the same name, then the case of the existing
file on the disk will be used. However, if a file name with exactly the same
name is not found but a name differing only by case exists, rclone will
transparently fixup the name. This fixup happens only when an existing file
is requested. Case sensitivity of file names created anew by rclone is
controlled by an underlying mounted file system.
controlled by the underlying remote.
Note that case sensitivity of the operating system running rclone (the target)
may differ from case sensitivity of a file system mounted by rclone (the source).
may differ from case sensitivity of a file system presented by rclone (the source).
The flag controls whether "fixup" is performed to satisfy the target.
If the flag is not provided on the command line, then its default value depends
on the operating system where rclone runs: "true" on Windows and macOS, "false"
otherwise. If the flag is provided without a value, then it is "true".
## VFS Disk Options
This flag allows you to manually set the statistics about the filing system.
It can be useful when those statistics cannot be read correctly automatically.
--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
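For example, to report a fixed 256 GiB of total space to programs that query the file system (the remote name and mount point below are placeholders):

```shell
# The reported size is advisory only; it does not limit actual usage.
rclone mount remote:path /mnt/remote --vfs-disk-space-total-size 256G
```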
## Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used.
@ -367,7 +406,7 @@ rclone serve docker [flags]
--noapplexattr Ignore all "com.apple.*" extended attributes (supported on OSX only)
-o, --option stringArray Option for libfuse/WinFsp (repeat if required)
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Mount read-only
--read-only Only allow read-only access
--socket-addr string Address <host:port> or absolute path (default: /run/docker/plugins/rclone.sock)
--socket-gid int GID for unix socket (default: current process GID) (default 1000)
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
@ -377,6 +416,8 @@ rclone serve docker [flags]
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)


@ -12,9 +12,9 @@ Serve remote:path over FTP.
## Synopsis
rclone serve ftp implements a basic ftp server to serve the
remote over FTP protocol. This can be viewed with a ftp client
or you can make a remote of type ftp to read and write it.
Run a basic FTP server to serve a remote over FTP protocol.
This can be viewed with an FTP client or you can make a remote of
type FTP to read and write it.
## Server options
@ -50,7 +50,7 @@ about files and directories (but not the data) in memory.
Using the `--dir-cache-time` flag, you can control how long a
directory should be considered up to date and not refreshed from the
backend. Changes made through the mount will appear immediately or
backend. Changes made through the VFS will appear immediately or
invalidate the cache.
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
@ -207,6 +207,38 @@ FAT/exFAT do not. Rclone will perform very badly if the cache
directory is on a filesystem which doesn't support sparse files and it
will log an ERROR message if one is detected.
### Fingerprinting
Various parts of the VFS use fingerprinting to see if a local file
copy has changed relative to a remote file. Fingerprints are made
from:
- size
- modification time
- hash
where available on an object.
On some backends some of these attributes are slow to read (they take
an extra API call per object, or extra work per object).
For example `hash` is slow with the `local` and `sftp` backends as
they have to read the entire file and hash it, and `modtime` is slow
with the `s3`, `swift`, `ftp` and `qingstor` backends because they
need to do an extra API call to fetch it.
If you use the `--vfs-fast-fingerprint` flag then rclone will not
include the slow operations in the fingerprint. This makes the
fingerprinting less accurate but much faster and will improve the
opening time of cached files.
If you are running a vfs cache over `local`, `s3` or `swift` backends
then using this flag is recommended.
Note that if you change the value of this flag, the fingerprints of
the files in the cache may be invalidated and the files will need to
be downloaded again.
## VFS Chunked Reading
When rclone reads files from a remote it reads them in chunks. This
@ -247,7 +279,7 @@ read of the modification time takes a transaction.
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--read-only Mount read-only.
--read-only Only allow read-only access.
Sometimes rclone is delivered reads or writes out of order. Rather
than seeking rclone will wait a short time for the in sequence read or
@ -259,7 +291,7 @@ on disk cache file.
When using VFS write caching (`--vfs-cache-mode` with value writes or full),
the global flag `--transfers` can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag `--checkers` have no effect on mount).
modified files from the cache (the related global flag `--checkers` has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
@ -276,28 +308,35 @@ It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS
file systems case-sensitive but that is not the default.
The `--vfs-case-insensitive` mount flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the mounted
file system as-is. If the flag is "true" (or appears without a value on
The `--vfs-case-insensitive` VFS flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the remote
as-is. If the flag is "true" (or appears without a value on the
command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case
different than what is stored on mounted file system. If an argument refers
different than what is stored on the remote. If an argument refers
to an existing file with exactly the same name, then the case of the existing
file on the disk will be used. However, if a file name with exactly the same
name is not found but a name differing only by case exists, rclone will
transparently fixup the name. This fixup happens only when an existing file
is requested. Case sensitivity of file names created anew by rclone is
controlled by an underlying mounted file system.
controlled by the underlying remote.
Note that case sensitivity of the operating system running rclone (the target)
may differ from case sensitivity of a file system mounted by rclone (the source).
may differ from case sensitivity of a file system presented by rclone (the source).
The flag controls whether "fixup" is performed to satisfy the target.
If the flag is not provided on the command line, then its default value depends
on the operating system where rclone runs: "true" on Windows and macOS, "false"
otherwise. If the flag is provided without a value, then it is "true".
## VFS Disk Options
This flag allows you to manually set the statistics about the filing system.
It can be useful when those statistics cannot be read correctly automatically.
--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
## Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used.
@ -416,7 +455,7 @@ rclone serve ftp remote:path [flags]
--passive-port string Passive port range to use (default "30000-32000")
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--public-ip string Public IP address to advertise for passive connections
--read-only Mount read-only
--read-only Only allow read-only access
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--user string User name for authentication (default "anonymous")
@ -425,6 +464,8 @@ rclone serve ftp remote:path [flags]
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)


@ -11,59 +11,59 @@ Serve the remote over HTTP.
## Synopsis
rclone serve http implements a basic web server to serve the remote
over HTTP. This can be viewed in a web browser or you can make a
remote of type http read from it.
Run a basic web server to serve a remote over HTTP.
This can be viewed in a web browser or you can make a remote of type
http read from it.
You can use the filter flags (e.g. --include, --exclude) to control what
You can use the filter flags (e.g. `--include`, `--exclude`) to control what
is served.
The server will log errors. Use -v to see access logs.
The server will log errors. Use `-v` to see access logs.
--bwlimit will be respected for file transfers. Use --stats to
`--bwlimit` will be respected for file transfers. Use `--stats` to
control the stats printing.
## Server options
Use --addr to specify which IP address and port the server should
listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all
Use `--addr` to specify which IP address and port the server should
listen on, eg `--addr 1.2.3.4:8000` or `--addr :8080` to listen to all
IPs. By default it only listens on localhost. You can use port
:0 to let the OS choose an available port.
If you set --addr to listen on a public or LAN accessible IP address
If you set `--addr` to listen on a public or LAN accessible IP address
then using Authentication is advised - see the next section for info.
--server-read-timeout and --server-write-timeout can be used to
`--server-read-timeout` and `--server-write-timeout` can be used to
control the timeouts on the server. Note that this is the total time
for a transfer.
--max-header-bytes controls the maximum number of bytes the server will
`--max-header-bytes` controls the maximum number of bytes the server will
accept in the HTTP header.
--baseurl controls the URL prefix that rclone serves from. By default
rclone will serve from the root. If you used --baseurl "/rclone" then
`--baseurl` controls the URL prefix that rclone serves from. By default
rclone will serve from the root. If you used `--baseurl "/rclone"` then
rclone would serve from a URL starting with "/rclone/". This is
useful if you wish to proxy rclone serve. Rclone automatically
inserts leading and trailing "/" on --baseurl, so --baseurl "rclone",
--baseurl "/rclone" and --baseurl "/rclone/" are all treated
inserts leading and trailing "/" on `--baseurl`, so `--baseurl "rclone"`,
`--baseurl "/rclone"` and `--baseurl "/rclone/"` are all treated
identically.
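For instance, to run the server behind a reverse proxy that forwards requests for `/rclone/` (the address here is an example):

```shell
# Listen only on loopback; the proxy terminates external connections.
rclone serve http remote:path --addr 127.0.0.1:8080 --baseurl /rclone
```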
### SSL/TLS
By default this will serve over http. If you want you can serve over
https. You will need to supply the --cert and --key flags. If you
wish to do client side certificate validation then you will need to
supply --client-ca also.
https. You will need to supply the `--cert` and `--key` flags.
If you wish to do client side certificate validation then you will need to
supply `--client-ca` also.
--cert should be a either a PEM encoded certificate or a concatenation
of that with the CA certificate. --key should be the PEM encoded
private key and --client-ca should be the PEM encoded client
`--cert` should be either a PEM encoded certificate or a concatenation
of that with the CA certificate. `--key` should be the PEM encoded
private key and `--client-ca` should be the PEM encoded client
certificate authority certificate.
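A typical HTTPS invocation looks like this (the certificate and key paths are placeholders):

```shell
rclone serve http remote:path --addr :8443 \
    --cert /etc/ssl/certs/server.pem --key /etc/ssl/private/server.key
```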
### Template
--template allows a user to specify a custom markup template for http
and webdav serve functions. The server exports the following markup
`--template` allows a user to specify a custom markup template for HTTP
and WebDAV serve functions. The server exports the following markup
to be used within the template to serve pages:
| Parameter | Description |
@ -90,9 +90,9 @@ to be used within the template to server pages:
By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or
set a single username and password with the --user and --pass flags.
set a single username and password with the `--user` and `--pass` flags.
Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is
Use `--htpasswd /path/to/htpasswd` to provide an htpasswd file. This is
in standard apache format and supports MD5, SHA1 and BCrypt for basic
authentication. Bcrypt is recommended.
@ -104,9 +104,9 @@ To create an htpasswd file:
The password file can be updated while rclone is running.
Use --realm to set the authentication realm.
Use `--realm` to set the authentication realm.
Use --salt to change the password hashing salt from the default.
Use `--salt` to change the password hashing salt from the default.
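Putting these together, a server with htpasswd authentication and a custom realm might be started like this (the htpasswd path and realm are illustrative):

```shell
rclone serve http remote:path --addr :8080 \
    --htpasswd /etc/rclone/htpasswd --realm "my files"
```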
## VFS - Virtual File System
@ -126,7 +126,7 @@ about files and directories (but not the data) in memory.
Using the `--dir-cache-time` flag, you can control how long a
directory should be considered up to date and not refreshed from the
backend. Changes made through the mount will appear immediately or
backend. Changes made through the VFS will appear immediately or
invalidate the cache.
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
@ -283,6 +283,38 @@ FAT/exFAT do not. Rclone will perform very badly if the cache
directory is on a filesystem which doesn't support sparse files and it
will log an ERROR message if one is detected.
### Fingerprinting
Various parts of the VFS use fingerprinting to see if a local file
copy has changed relative to a remote file. Fingerprints are made
from:
- size
- modification time
- hash
where available on an object.
On some backends some of these attributes are slow to read (they take
an extra API call per object, or extra work per object).
For example `hash` is slow with the `local` and `sftp` backends as
they have to read the entire file and hash it, and `modtime` is slow
with the `s3`, `swift`, `ftp` and `qingstor` backends because they
need to do an extra API call to fetch it.
If you use the `--vfs-fast-fingerprint` flag then rclone will not
include the slow operations in the fingerprint. This makes the
fingerprinting less accurate but much faster and will improve the
opening time of cached files.
If you are running a vfs cache over `local`, `s3` or `swift` backends
then using this flag is recommended.
Note that if you change the value of this flag, the fingerprints of
the files in the cache may be invalidated and the files will need to
be downloaded again.
## VFS Chunked Reading
When rclone reads files from a remote it reads them in chunks. This
@ -323,7 +355,7 @@ read of the modification time takes a transaction.
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--read-only Mount read-only.
--read-only Only allow read-only access.
Sometimes rclone is delivered reads or writes out of order. Rather
than seeking rclone will wait a short time for the in sequence read or
@ -335,7 +367,7 @@ on disk cache file.
When using VFS write caching (`--vfs-cache-mode` with value writes or full),
the global flag `--transfers` can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag `--checkers` have no effect on mount).
modified files from the cache (the related global flag `--checkers` has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
@ -352,28 +384,35 @@ It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS
file systems case-sensitive but that is not the default.
The `--vfs-case-insensitive` mount flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the mounted
file system as-is. If the flag is "true" (or appears without a value on
The `--vfs-case-insensitive` VFS flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the remote
as-is. If the flag is "true" (or appears without a value on the
command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case
different than what is stored on mounted file system. If an argument refers
different than what is stored on the remote. If an argument refers
to an existing file with exactly the same name, then the case of the existing
file on the disk will be used. However, if a file name with exactly the same
name is not found but a name differing only by case exists, rclone will
transparently fixup the name. This fixup happens only when an existing file
is requested. Case sensitivity of file names created anew by rclone is
controlled by an underlying mounted file system.
controlled by the underlying remote.
Note that case sensitivity of the operating system running rclone (the target)
may differ from case sensitivity of a file system mounted by rclone (the source).
may differ from case sensitivity of a file system presented by rclone (the source).
The flag controls whether "fixup" is performed to satisfy the target.
If the flag is not provided on the command line, then its default value depends
on the operating system where rclone runs: "true" on Windows and macOS, "false"
otherwise. If the flag is provided without a value, then it is "true".
## VFS Disk Options
This flag allows you to manually set the statistics about the filing system.
It can be useful when those statistics cannot be read correctly automatically.
--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
## Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used.
@ -412,7 +451,7 @@ rclone serve http remote:path [flags]
--no-seek Don't allow seeking in files
--pass string Password for authentication
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Mount read-only
--read-only Only allow read-only access
--realm string Realm for authentication
--salt string Password hashing salt (default "dlPL2MqE")
--server-read-timeout duration Timeout for server reading data (default 1h0m0s)
@ -426,6 +465,8 @@ rclone serve http remote:path [flags]
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)


@ -11,8 +11,8 @@ Serve the remote for restic's REST API.
## Synopsis
rclone serve restic implements restic's REST backend API
over HTTP. This allows restic to use rclone as a data storage
Run a basic web server to serve a remote over restic's REST backend
API over HTTP. This allows restic to use rclone as a data storage
mechanism for cloud providers that restic does not support directly.
[Restic](https://restic.net/) is a command-line program for doing
@ -20,8 +20,8 @@ backups.
The server will log errors. Use -v to see access logs.
--bwlimit will be respected for file transfers. Use --stats to
control the stats printing.
`--bwlimit` will be respected for file transfers.
Use `--stats` to control the stats printing.
## Setting up rclone for use by restic ###
@ -40,11 +40,11 @@ Where you can replace "backup" in the above by whatever path in the
remote you wish to use.
By default this will serve on "localhost:8080" you can change this
with use of the "--addr" flag.
with use of the `--addr` flag.
You might wish to start this server on boot.
Adding --cache-objects=false will cause rclone to stop caching objects
Adding `--cache-objects=false` will cause rclone to stop caching objects
returned from the List call. Caching is normally desirable as it speeds
up downloading objects, saves transactions and uses very little memory.
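A minimal invocation following the setup above might look like this (with "backup" standing in for whatever path in the remote you use):

```shell
# Serve the remote for restic on the default localhost:8080
rclone serve restic -v remote:backup
```

restic can then be pointed at it with a repository URL of the form `rest:http://localhost:8080/`.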
@ -90,36 +90,36 @@ these **must** end with /. Eg
### Private repositories ####
The "--private-repos" flag can be used to limit users to repositories starting
The `--private-repos` flag can be used to limit users to repositories starting
with a path of `/<username>/`.
## Server options
Use --addr to specify which IP address and port the server should
listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all
IPs. By default it only listens on localhost. You can use port
Use `--addr` to specify which IP address and port the server should
listen on, e.g. `--addr 1.2.3.4:8000` or `--addr :8080` to
listen to all IPs. By default it only listens on localhost. You can use port
:0 to let the OS choose an available port.
If you set --addr to listen on a public or LAN accessible IP address
If you set `--addr` to listen on a public or LAN accessible IP address
then using Authentication is advised - see the next section for info.
--server-read-timeout and --server-write-timeout can be used to
`--server-read-timeout` and `--server-write-timeout` can be used to
control the timeouts on the server. Note that this is the total time
for a transfer.
--max-header-bytes controls the maximum number of bytes the server will
`--max-header-bytes` controls the maximum number of bytes the server will
accept in the HTTP header.
--baseurl controls the URL prefix that rclone serves from. By default
rclone will serve from the root. If you used --baseurl "/rclone" then
`--baseurl` controls the URL prefix that rclone serves from. By default
rclone will serve from the root. If you used `--baseurl "/rclone"` then
rclone would serve from a URL starting with "/rclone/". This is
useful if you wish to proxy rclone serve. Rclone automatically
inserts leading and trailing "/" on --baseurl, so --baseurl "rclone",
--baseurl "/rclone" and --baseurl "/rclone/" are all treated
inserts leading and trailing "/" on `--baseurl`, so `--baseurl "rclone"`,
`--baseurl "/rclone"` and `--baseurl "/rclone/"` are all treated
identically.
--template allows a user to specify a custom markup template for http
and webdav serve functions. The server exports the following markup
`--template` allows a user to specify a custom markup template for HTTP
and WebDAV serve functions. The server exports the following markup
to be used within the template to serve pages:
| Parameter | Description |
@ -146,9 +146,9 @@ to be used within the template to server pages:
By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or
set a single username and password with the --user and --pass flags.
set a single username and password with the `--user` and `--pass` flags.
Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is
Use `--htpasswd /path/to/htpasswd` to provide an htpasswd file. This is
in standard apache format and supports MD5, SHA1 and BCrypt for basic
authentication. Bcrypt is recommended.
@ -160,18 +160,18 @@ To create an htpasswd file:
The password file can be updated while rclone is running.
Use --realm to set the authentication realm.
Use `--realm` to set the authentication realm.
### SSL/TLS
By default this will serve over http. If you want you can serve over
https. You will need to supply the --cert and --key flags. If you
wish to do client side certificate validation then you will need to
supply --client-ca also.
By default this will serve over HTTP. If you want you can serve over
HTTPS. You will need to supply the `--cert` and `--key` flags.
If you wish to do client side certificate validation then you will need to
supply `--client-ca` also.
--cert should be either a PEM encoded certificate or a concatenation
of that with the CA certificate. --key should be the PEM encoded
private key and --client-ca should be the PEM encoded client
`--cert` should be either a PEM encoded certificate or a concatenation
of that with the CA certificate. `--key` should be the PEM encoded
private key and `--client-ca` should be the PEM encoded client
certificate authority certificate.


@ -11,21 +11,21 @@ Serve the remote over SFTP.
## Synopsis
rclone serve sftp implements an SFTP server to serve the remote
over SFTP. This can be used with an SFTP client or you can make a
remote of type sftp to use with it.
Run an SFTP server to serve a remote over SFTP. This can be used
with an SFTP client or you can make a remote of type sftp to use with it.
You can use the filter flags (e.g. --include, --exclude) to control what
You can use the filter flags (e.g. `--include`, `--exclude`) to control what
is served.
The server will log errors. Use -v to see access logs.
The server will log errors. Use `-v` to see access logs.
--bwlimit will be respected for file transfers. Use --stats to
control the stats printing.
`--bwlimit` will be respected for file transfers.
Use `--stats` to control the stats printing.
You must provide some means of authentication, either with --user/--pass,
an authorized keys file (specify location with --authorized-keys - the
default is the same as ssh), an --auth-proxy, or set the --no-auth flag for no
You must provide some means of authentication, either with
`--user`/`--pass`, an authorized keys file (specify location with
`--authorized-keys` - the default is the same as ssh), an
`--auth-proxy`, or set the `--no-auth` flag for no
authentication when logging in.
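For example, to serve with a single username and password (the credentials below are placeholders):

```shell
rclone serve sftp remote:path --addr :2022 --user sftpuser --pass secret
```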
Note that this also implements a small number of shell commands so
@ -33,30 +33,30 @@ that it can provide md5sum/sha1sum/df information for the rclone sftp
backend. This means that it can support SHA1SUMs, MD5SUMs and the
about command when paired with the rclone sftp backend.
If you don't supply a host --key then rclone will generate rsa, ecdsa
If you don't supply a host `--key` then rclone will generate rsa, ecdsa
and ed25519 variants, and cache them for later use in rclone's cache
directory (see "rclone help flags cache-dir") in the "serve-sftp"
directory (see `rclone help flags cache-dir`) in the "serve-sftp"
directory.
By default the server binds to localhost:2022 - if you want it to be
reachable externally then supply "--addr :2022" for example.
reachable externally then supply `--addr :2022` for example.
Note that the default of "--vfs-cache-mode off" is fine for the rclone
Note that the default of `--vfs-cache-mode off` is fine for the rclone
sftp backend, but it may not be with other SFTP clients.
If --stdio is specified, rclone will serve SFTP over stdio, which can
If `--stdio` is specified, rclone will serve SFTP over stdio, which can
be used with sshd via ~/.ssh/authorized_keys, for example:
restrict,command="rclone serve sftp --stdio ./photos" ssh-rsa ...
On the client you need to set "--transfers 1" when using --stdio.
On the client you need to set `--transfers 1` when using `--stdio`.
Otherwise multiple instances of the rclone server are started by OpenSSH
which can lead to "corrupted on transfer" errors. This is the case because
the client chooses indiscriminately which server to send commands to while
the servers all have different views of the state of the filing system.
The "restrict" in authorized_keys prevents SHA1SUMs and MD5SUMs from being
used. Omitting "restrict" and using --sftp-path-override to enable
used. Omitting "restrict" and using `--sftp-path-override` to enable
checksumming is possible but less secure and you could use the SFTP server
provided by OpenSSH in this case.
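
As a sketch of the client side of this pairing (host and user are
illustrative), assuming the authorized_keys entry shown above is installed
on the server, a transfer using an on-the-fly sftp remote might look like:

    rclone sync ./photos :sftp,host=example.com,user=me:photos --transfers 1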
@ -79,7 +79,7 @@ about files and directories (but not the data) in memory.
Using the `--dir-cache-time` flag, you can control how long a
directory should be considered up to date and not refreshed from the
backend. Changes made through the mount will appear immediately or
backend. Changes made through the VFS will appear immediately or
invalidate the cache.
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
@ -236,6 +236,38 @@ FAT/exFAT do not. Rclone will perform very badly if the cache
directory is on a filesystem which doesn't support sparse files and it
will log an ERROR message if one is detected.
### Fingerprinting
Various parts of the VFS use fingerprinting to see if a local file
copy has changed relative to a remote file. Fingerprints are made
from:
- size
- modification time
- hash
where available on an object.
On some backends some of these attributes are slow to read (they take
an extra API call per object, or extra work per object).
For example `hash` is slow with the `local` and `sftp` backends as
they have to read the entire file and hash it, and `modtime` is slow
with the `s3`, `swift`, `ftp` and `qingstor` backends because they
need to do an extra API call to fetch it.
If you use the `--vfs-fast-fingerprint` flag then rclone will not
include the slow operations in the fingerprint. This makes the
fingerprinting less accurate but much faster and will improve the
opening time of cached files.
If you are running a vfs cache over `local`, `s3` or `swift` backends
then using this flag is recommended.
Note that if you change the value of this flag, the fingerprints of
the files in the cache may be invalidated and the files will need to
be downloaded again.
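
As a concrete illustration, the flag is simply added to the serve command
line (remote name illustrative):

    rclone serve sftp remote:path --vfs-cache-mode full --vfs-fast-fingerprint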
## VFS Chunked Reading
When rclone reads files from a remote it reads them in chunks. This
@ -276,7 +308,7 @@ read of the modification time takes a transaction.
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--read-only Mount read-only.
--read-only Only allow read-only access.
Sometimes rclone is delivered reads or writes out of order. Rather
than seeking rclone will wait a short time for the in sequence read or
@ -288,7 +320,7 @@ on disk cache file.
When using VFS write caching (`--vfs-cache-mode` with value writes or full),
the global flag `--transfers` can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag `--checkers` have no effect on mount).
modified files from the cache (the related global flag `--checkers` has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
@ -305,28 +337,35 @@ It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS
file systems case-sensitive but that is not the default.
The `--vfs-case-insensitive` mount flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the mounted
file system as-is. If the flag is "true" (or appears without a value on
The `--vfs-case-insensitive` VFS flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the remote
as-is. If the flag is "true" (or appears without a value on the
command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case
different than what is stored on mounted file system. If an argument refers
different than what is stored on the remote. If an argument refers
to an existing file with exactly the same name, then the case of the existing
file on the disk will be used. However, if a file name with exactly the same
name is not found but a name differing only by case exists, rclone will
transparently fixup the name. This fixup happens only when an existing file
is requested. Case sensitivity of file names created anew by rclone is
controlled by an underlying mounted file system.
controlled by the underlying remote.
Note that case sensitivity of the operating system running rclone (the target)
may differ from case sensitivity of a file system mounted by rclone (the source).
may differ from case sensitivity of a file system presented by rclone (the source).
The flag controls whether "fixup" is performed to satisfy the target.
If the flag is not provided on the command line, then its default value depends
on the operating system where rclone runs: "true" on Windows and macOS, "false"
otherwise. If the flag is provided without a value, then it is "true".
## VFS Disk Options
This flag allows you to manually set the statistics about the filing system.
It can be useful when those statistics cannot be read correctly automatically.
--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
## Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used.
@ -444,7 +483,7 @@ rclone serve sftp remote:path [flags]
--no-seek Don't allow seeking in files
--pass string Password for authentication
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Mount read-only
--read-only Only allow read-only access
      --stdio                                  Run an sftp server on stdin/stdout
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
@ -454,6 +493,8 @@ rclone serve sftp remote:path [flags]
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)

View file

@ -1,23 +1,21 @@
---
title: "rclone serve webdav"
description: "Serve remote:path over webdav."
description: "Serve remote:path over WebDAV."
slug: rclone_serve_webdav
url: /commands/rclone_serve_webdav/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/serve/webdav/ and as part of making a release run "make commanddocs"
---
# rclone serve webdav
Serve remote:path over webdav.
Serve remote:path over WebDAV.
## Synopsis
Run a basic WebDAV server to serve a remote over HTTP via the
WebDAV protocol. This can be viewed with a WebDAV client, through a web
browser, or you can make a remote of type WebDAV to read and write it.
rclone serve webdav implements a basic webdav server to serve the
remote over HTTP via the webdav protocol. This can be viewed with a
webdav client, through a web browser, or you can make a remote of
type webdav to read and write it.
## Webdav options
## WebDAV options
### --etag-hash
@ -26,38 +24,37 @@ based on the ModTime and Size of the object.
If this flag is set to "auto" then rclone will choose the first
supported hash on the backend or you can use a named hash such as
"MD5" or "SHA-1".
Use "rclone hashsum" to see the full list.
"MD5" or "SHA-1". Use the [hashsum](/commands/rclone_hashsum/) command
to see the full list.
## Server options
Use --addr to specify which IP address and port the server should
listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all
IPs. By default it only listens on localhost. You can use port
Use `--addr` to specify which IP address and port the server should
listen on, e.g. `--addr 1.2.3.4:8000` or `--addr :8080` to
listen to all IPs. By default it only listens on localhost. You can use port
:0 to let the OS choose an available port.
If you set --addr to listen on a public or LAN accessible IP address
If you set `--addr` to listen on a public or LAN accessible IP address
then using Authentication is advised - see the next section for info.
--server-read-timeout and --server-write-timeout can be used to
`--server-read-timeout` and `--server-write-timeout` can be used to
control the timeouts on the server. Note that this is the total time
for a transfer.
--max-header-bytes controls the maximum number of bytes the server will
`--max-header-bytes` controls the maximum number of bytes the server will
accept in the HTTP header.
--baseurl controls the URL prefix that rclone serves from. By default
rclone will serve from the root. If you used --baseurl "/rclone" then
`--baseurl` controls the URL prefix that rclone serves from. By default
rclone will serve from the root. If you used `--baseurl "/rclone"` then
rclone would serve from a URL starting with "/rclone/". This is
useful if you wish to proxy rclone serve. Rclone automatically
inserts leading and trailing "/" on --baseurl, so --baseurl "rclone",
--baseurl "/rclone" and --baseurl "/rclone/" are all treated
inserts leading and trailing "/" on `--baseurl`, so `--baseurl "rclone"`,
`--baseurl "/rclone"` and `--baseurl "/rclone/"` are all treated
identically.
--template allows a user to specify a custom markup template for http
and webdav serve functions. The server exports the following markup
`--template` allows a user to specify a custom markup template for HTTP
and WebDAV serve functions. The server exports the following markup
to be used within the template to server pages:
| Parameter | Description |
@ -84,9 +81,9 @@ to be used within the template to server pages:
By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or
set a single username and password with the --user and --pass flags.
set a single username and password with the `--user` and `--pass` flags.
Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is
Use `--htpasswd /path/to/htpasswd` to provide an htpasswd file. This is
in standard apache format and supports MD5, SHA1 and BCrypt for basic
authentication. Bcrypt is recommended.
@ -98,18 +95,18 @@ To create an htpasswd file:
The password file can be updated while rclone is running.
Use --realm to set the authentication realm.
Use `--realm` to set the authentication realm.
### SSL/TLS
By default this will serve over http. If you want you can serve over
https. You will need to supply the --cert and --key flags. If you
wish to do client side certificate validation then you will need to
supply --client-ca also.
By default this will serve over HTTP. If you want you can serve over
HTTPS. You will need to supply the `--cert` and `--key` flags.
If you wish to do client side certificate validation then you will need to
supply `--client-ca` also.
--cert should be either a PEM encoded certificate or a concatenation
of that with the CA certificate. --key should be the PEM encoded
private key and --client-ca should be the PEM encoded client
`--cert` should be either a PEM encoded certificate or a concatenation
of that with the CA certificate. `--key` should be the PEM encoded
private key and `--client-ca` should be the PEM encoded client
certificate authority certificate.
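
For testing, a self-signed certificate and key in the expected PEM format
can be generated with OpenSSL (file names and subject are illustrative):

```shell
# Create a throwaway self-signed certificate and matching private key
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout key.pem -out cert.pem -days 30 -subj "/CN=localhost"
```

The resulting files can then be passed as `--cert cert.pem --key key.pem`.
Note that clients will need to trust, or be told to ignore, the
self-signed certificate.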
## VFS - Virtual File System
@ -130,7 +127,7 @@ about files and directories (but not the data) in memory.
Using the `--dir-cache-time` flag, you can control how long a
directory should be considered up to date and not refreshed from the
backend. Changes made through the mount will appear immediately or
backend. Changes made through the VFS will appear immediately or
invalidate the cache.
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
@ -287,6 +284,38 @@ FAT/exFAT do not. Rclone will perform very badly if the cache
directory is on a filesystem which doesn't support sparse files and it
will log an ERROR message if one is detected.
### Fingerprinting
Various parts of the VFS use fingerprinting to see if a local file
copy has changed relative to a remote file. Fingerprints are made
from:
- size
- modification time
- hash
where available on an object.
On some backends some of these attributes are slow to read (they take
an extra API call per object, or extra work per object).
For example `hash` is slow with the `local` and `sftp` backends as
they have to read the entire file and hash it, and `modtime` is slow
with the `s3`, `swift`, `ftp` and `qingstor` backends because they
need to do an extra API call to fetch it.
If you use the `--vfs-fast-fingerprint` flag then rclone will not
include the slow operations in the fingerprint. This makes the
fingerprinting less accurate but much faster and will improve the
opening time of cached files.
If you are running a vfs cache over `local`, `s3` or `swift` backends
then using this flag is recommended.
Note that if you change the value of this flag, the fingerprints of
the files in the cache may be invalidated and the files will need to
be downloaded again.
## VFS Chunked Reading
When rclone reads files from a remote it reads them in chunks. This
@ -327,7 +356,7 @@ read of the modification time takes a transaction.
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--read-only Mount read-only.
--read-only Only allow read-only access.
Sometimes rclone is delivered reads or writes out of order. Rather
than seeking rclone will wait a short time for the in sequence read or
@ -339,7 +368,7 @@ on disk cache file.
When using VFS write caching (`--vfs-cache-mode` with value writes or full),
the global flag `--transfers` can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag `--checkers` have no effect on mount).
modified files from the cache (the related global flag `--checkers` has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
@ -356,28 +385,35 @@ It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS
file systems case-sensitive but that is not the default.
The `--vfs-case-insensitive` mount flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the mounted
file system as-is. If the flag is "true" (or appears without a value on
The `--vfs-case-insensitive` VFS flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the remote
as-is. If the flag is "true" (or appears without a value on the
command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case
different than what is stored on mounted file system. If an argument refers
different than what is stored on the remote. If an argument refers
to an existing file with exactly the same name, then the case of the existing
file on the disk will be used. However, if a file name with exactly the same
name is not found but a name differing only by case exists, rclone will
transparently fixup the name. This fixup happens only when an existing file
is requested. Case sensitivity of file names created anew by rclone is
controlled by an underlying mounted file system.
controlled by the underlying remote.
Note that case sensitivity of the operating system running rclone (the target)
may differ from case sensitivity of a file system mounted by rclone (the source).
may differ from case sensitivity of a file system presented by rclone (the source).
The flag controls whether "fixup" is performed to satisfy the target.
If the flag is not provided on the command line, then its default value depends
on the operating system where rclone runs: "true" on Windows and macOS, "false"
otherwise. If the flag is provided without a value, then it is "true".
## VFS Disk Options
This flag allows you to manually set the statistics about the filing system.
It can be useful when those statistics cannot be read correctly automatically.
--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
## Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used.
@ -500,7 +536,7 @@ rclone serve webdav remote:path [flags]
--no-seek Don't allow seeking in files
--pass string Password for authentication
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Mount read-only
--read-only Only allow read-only access
--realm string Realm for authentication (default "rclone")
--server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--server-write-timeout duration Timeout for server writing data (default 1h0m0s)
@ -513,6 +549,8 @@ rclone serve webdav remote:path [flags]
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)

View file

@ -20,6 +20,10 @@ not supported by the remote, no hash will be returned. With the
download flag, the file will be downloaded from the remote and
hashed locally enabling SHA-1 for any remote.
For other algorithms, see the [hashsum](/commands/rclone_hashsum/)
command. Running `rclone sha1sum remote:path` is equivalent
to running `rclone hashsum SHA1 remote:path`.
This command can also hash data received on standard input (stdin),
by not passing a remote:path, or by passing a hyphen as remote:path
when there is data to read (if not, the hyphen will be treated literally,

View file

@ -9,6 +9,28 @@ url: /commands/rclone_size/
Prints the total size and number of objects in remote:path.
## Synopsis
Counts objects in the path and calculates the total size. Prints the
result to standard output.
By default the output is in human-readable format, but shows values in
both human-readable format as well as the raw numbers (global option
`--human-readable` is not considered). Use option `--json`
to format output as JSON instead.
Recurses by default, use `--max-depth 1` to stop the
recursion.
Some backends do not always provide file sizes, see for example
[Google Photos](/googlephotos/#size) and
[Google Drive](/drive/#limitations-of-google-docs).
Rclone will then show a notice in the log indicating how many such
files were encountered, and count them in as empty files in the output
of the size command.
```
rclone size remote:path [flags]
```
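
A sketch of the JSON form (the values shown are illustrative):

    rclone size remote:path --json
    {"count":42,"bytes":1638400}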

View file

@ -16,7 +16,9 @@ Sync the source to the destination, changing the destination
only. Doesn't transfer files that are identical on source and
destination, testing by size and modification time or MD5SUM.
Destination is updated to match source, including deleting files
if necessary (except duplicate objects, see below).
if necessary (except duplicate objects, see below). If you don't
want to delete files from destination, use the
[copy](/commands/rclone_copy/) command instead.
**Important**: Since this can cause data loss, test first with the
`--dry-run` or the `--interactive`/`-i` flag.
@ -30,7 +32,7 @@ those providers that support it) are also not yet handled.
It is always the contents of the directory that is synced, not the
directory itself. So when source:path is a directory, it's the contents of
source:path that are copied, not the directory name and contents. See
extended explanation in the `copy` command above if unsure.
extended explanation in the [copy](/commands/rclone_copy/) command if unsure.
If dest:path doesn't exist, it is created and the source:path contents
go there.

View file

@ -37,6 +37,7 @@ See the [global flags page](/flags/) for global options not listed here.
* [rclone test changenotify](/commands/rclone_test_changenotify/) - Log any change notify requests for the remote passed in.
* [rclone test histogram](/commands/rclone_test_histogram/) - Makes a histogram of file name characters.
* [rclone test info](/commands/rclone_test_info/) - Discovers file name or other limitations for paths.
* [rclone test makefile](/commands/rclone_test_makefile/) - Make files with random contents of the size given
* [rclone test makefiles](/commands/rclone_test_makefiles/) - Make a random file hierarchy in a directory
* [rclone test memory](/commands/rclone_test_memory/) - Load all the objects at remote:path into memory and report memory stats.

View file

@ -0,0 +1,33 @@
---
title: "rclone test makefile"
description: "Make files with random contents of the size given"
slug: rclone_test_makefile
url: /commands/rclone_test_makefile/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/test/makefile/ and as part of making a release run "make commanddocs"
---
# rclone test makefile
Make files with random contents of the size given
```
rclone test makefile <size> [<file>]+ [flags]
```
## Options
```
--ascii Fill files with random ASCII printable bytes only
--chargen Fill files with an ASCII chargen pattern
-h, --help help for makefile
--pattern Fill files with a periodic pattern
--seed int Seed for the random number generator (0 for random) (default 1)
--sparse Make the files sparse (appear to be filled with ASCII 0x00)
--zero Fill files with ASCII 0x00
```
See the [global flags page](/flags/) for global options not listed here.
## SEE ALSO
* [rclone test](/commands/rclone_test/) - Run a test command

View file

@ -16,6 +16,8 @@ rclone test makefiles <dir> [flags]
## Options
```
--ascii Fill files with random ASCII printable bytes only
--chargen Fill files with an ASCII chargen pattern
--files int Number of files to create (default 1000)
--files-per-directory int Average number of files per directory (default 10)
-h, --help help for makefiles
@ -23,7 +25,10 @@ rclone test makefiles <dir> [flags]
--max-name-length int Maximum size of file names (default 12)
--min-file-size SizeSuffix Minimum size of file to create
--min-name-length int Minimum size of file names (default 4)
--pattern Fill files with a periodic pattern
--seed int Seed for the random number generator (0 for random) (default 1)
--sparse Make the files sparse (appear to be filled with ASCII 0x00)
--zero Fill files with ASCII 0x00
```
See the [global flags page](/flags/) for global options not listed here.

View file

@ -29,12 +29,16 @@ For example
1 directories, 5 files
You can use any of the filtering options with the tree command (e.g.
--include and --exclude). You can also use --fast-list.
`--include` and `--exclude`). You can also use `--fast-list`.
The tree command has many options for controlling the listing which
are compatible with the tree command. Note that not all of them have
are compatible with the Unix tree command, for example you can include file
sizes with `--size`. Note that not all of them have
short options as they conflict with rclone's short options.
For a more interactive navigation of the remote see the
[ncdu](/commands/rclone_ncdu/) command.
```
rclone tree remote:path [flags]

View file

@ -90,7 +90,7 @@ size of the uncompressed file. The file names should not be changed by anything
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/compress/compress.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to compress (Compress a remote).
Here are the Standard options specific to compress (Compress a remote).
#### --compress-remote
@ -119,7 +119,7 @@ Properties:
### Advanced options
Here are the advanced options specific to compress (Compress a remote).
Here are the Advanced options specific to compress (Compress a remote).
#### --compress-level
@ -156,4 +156,10 @@ Properties:
- Type: SizeSuffix
- Default: 20Mi
### Metadata
Any metadata supported by the underlying remote is read and written.
See the [metadata](/docs/#metadata) docs for more info.
{{< rem autogenerated options stop >}}

View file

@ -419,7 +419,7 @@ check the checksums properly.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/crypt/crypt.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to crypt (Encrypt/Decrypt a remote).
Here are the Standard options specific to crypt (Encrypt/Decrypt a remote).
#### --crypt-remote
@ -504,7 +504,7 @@ Properties:
### Advanced options
Here are the advanced options specific to crypt (Encrypt/Decrypt a remote).
Here are the Advanced options specific to crypt (Encrypt/Decrypt a remote).
#### --crypt-server-side-across-configs
@ -584,6 +584,12 @@ Properties:
- Encode using base32768. Suitable if your remote counts UTF-16 or
- Unicode codepoint instead of UTF-8 byte length. (Eg. Onedrive)
### Metadata
Any metadata supported by the underlying remote is read and written.
See the [metadata](/docs/#metadata) docs for more info.
## Backend commands
Here are the commands specific to the crypt backend.
@ -594,7 +600,7 @@ Run them with
The help below will explain what arguments each command takes.
See [the "rclone backend" command](/commands/rclone_backend/) for more
See the [backend](/commands/rclone_backend/) command for more
info on how to pass options and arguments.
These can be run on a running backend using the rc command

View file

@ -548,7 +548,7 @@ Google Documents.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/drive/drive.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to drive (Google Drive).
Here are the Standard options specific to drive (Google Drive).
#### --drive-client-id
@ -603,22 +603,6 @@ Properties:
- Allows read-only access to file metadata but
- does not allow any access to read or download file content.
#### --drive-root-folder-id
ID of the root folder.
Leave blank normally.
Fill in to access "Computers" folders (see docs), or for rclone to use
a non root folder as its starting point.
Properties:
- Config: root_folder_id
- Env Var: RCLONE_DRIVE_ROOT_FOLDER_ID
- Type: string
- Required: false
#### --drive-service-account-file
Service Account Credentials JSON file path.
@ -648,7 +632,7 @@ Properties:
### Advanced options
Here are the advanced options specific to drive (Google Drive).
Here are the Advanced options specific to drive (Google Drive).
#### --drive-token
@ -687,6 +671,22 @@ Properties:
- Type: string
- Required: false
#### --drive-root-folder-id
ID of the root folder.
Leave blank normally.
Fill in to access "Computers" folders (see docs), or for rclone to use
a non root folder as its starting point.
Properties:
- Config: root_folder_id
- Env Var: RCLONE_DRIVE_ROOT_FOLDER_ID
- Type: string
- Required: false
#### --drive-service-account-credentials
Service Account Credentials JSON blob.
@ -1167,6 +1167,34 @@ Properties:
- Type: bool
- Default: false
#### --drive-resource-key
Resource key for accessing a link-shared file.
If you need to access files shared with a link like this
https://drive.google.com/drive/folders/XXX?resourcekey=YYY&usp=sharing
Then you will need to use the first part "XXX" as the "root_folder_id"
and the second part "YYY" as the "resource_key" otherwise you will get
404 not found errors when trying to access the directory.
See: https://developers.google.com/drive/api/guides/resource-keys
This resource key requirement only applies to a subset of old files.
Note also that opening the folder once in the web interface (with the
user you've authenticated rclone with) seems to be enough so that the
resource key is not needed.
Properties:
- Config: resource_key
- Env Var: RCLONE_DRIVE_RESOURCE_KEY
- Type: string
- Required: false
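
A matching config snippet, reusing the "XXX" and "YYY" placeholders from
the link above (the remote name is illustrative), might look like:

    [shared_link]
    type = drive
    root_folder_id = XXX
    resource_key = YYY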
#### --drive-encoding
The encoding for the backend.
@ -1190,7 +1218,7 @@ Run them with
The help below will explain what arguments each command takes.
See [the "rclone backend" command](/commands/rclone_backend/) for more
See the [backend](/commands/rclone_backend/) command for more
info on how to pass options and arguments.
These can be run on a running backend using the rc command
@ -1292,7 +1320,7 @@ This will return a JSON list of objects like this
With the -o config parameter it will output the list in a format
suitable for adding to a config file to make aliases for all the
drives found.
drives found and a combined drive.
[My Drive]
type = alias
@ -1302,10 +1330,15 @@ drives found.
type = alias
remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
Adding this to the rclone config file will cause those team drives to
be accessible with the aliases shown. This may require manual editing
of the names.
[AllDrives]
type = combine
remote = "My Drive=My Drive:" "Test Drive=Test Drive:"
Adding this to the rclone config file will cause those team drives to
be accessible with the aliases shown. Any illegal characters will be
substituted with "_" and duplicate names will have numbers suffixed.
It will also add a remote called AllDrives which shows all the shared
drives combined into one directory tree.
### untrash
@ -1362,6 +1395,18 @@ attempted if possible.
Use the -i flag to see what would be copied before copying.
### exportformats
Dump the export formats for debug purposes
rclone backend exportformats remote: [options] [<arguments>+]
### importformats
Dump the import formats for debug purposes
rclone backend importformats remote: [options] [<arguments>+]
{{< rem autogenerated options stop >}}
## Limitations

View file

@ -182,7 +182,7 @@ finishes up the last batch using this mode.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/dropbox/dropbox.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to dropbox (Dropbox).
Here are the Standard options specific to dropbox (Dropbox).
#### --dropbox-client-id
@ -212,7 +212,7 @@ Properties:
### Advanced options
Here are the advanced options specific to dropbox (Dropbox).
Here are the Advanced options specific to dropbox (Dropbox).
#### --dropbox-token

View file

@ -116,7 +116,7 @@ as they can't be used in JSON strings.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/fichier/fichier.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to fichier (1Fichier).
Here are the Standard options specific to fichier (1Fichier).
#### --fichier-api-key
@ -131,7 +131,7 @@ Properties:
### Advanced options
Here are the advanced options specific to fichier (1Fichier).
Here are the Advanced options specific to fichier (1Fichier).
#### --fichier-shared-folder

View file

@ -154,7 +154,7 @@ The ID for "S3 Storage" would be `120673761`.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/filefabric/filefabric.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to filefabric (Enterprise File Fabric).
Here are the Standard options specific to filefabric (Enterprise File Fabric).
#### --filefabric-url
@ -213,7 +213,7 @@ Properties:
### Advanced options
Here are the advanced options specific to filefabric (Enterprise File Fabric).
Here are the Advanced options specific to filefabric (Enterprise File Fabric).
#### --filefabric-token

View file

@ -38,6 +38,7 @@ These flags are available for every command.
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features (use --disable help to see a list)
--disable-http-keep-alives Disable HTTP keep-alives and use each connection once.
--disable-http2 Disable HTTP/2 in the global transport
-n, --dry-run Do a trial run with no permanent changes
--dscp string Set DSCP value to connections, value or name, e.g. CS1, LE, DF, AF21
@ -86,6 +87,8 @@ These flags are available for every command.
--max-stats-groups int Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
--memprofile string Write memory profile to file
-M, --metadata If set, preserve metadata when copying objects
--metadata-set stringArray Add metadata key=value when uploading
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
@ -157,7 +160,7 @@ These flags are available for every command.
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default "rclone/v1.58.0")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.59.0")
-v, --verbose count Print lots more stuff (repeat for more)
```
@ -212,6 +215,7 @@ and may be set in the config file.
--b2-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--b2-version-at Time Show file versions as they were at the specified time (default off)
--b2-versions Include old versions in directory listings
--box-access-token string Box App Primary Access Token
--box-auth-url string Auth server URL
@ -251,6 +255,7 @@ and may be set in the config file.
--chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks
--chunker-hash-type string Choose how chunker handles hash sums (default "md5")
--chunker-remote string Remote to chunk/unchunk
--combine-upstreams SpaceSepList Upstreams for combining
--compress-level int GZIP compression level (-2 to 9) (default -1)
--compress-mode string Compression mode (default "gzip")
--compress-ram-cache-limit SizeSuffix Some remotes don't allow the upload of files with unknown size (default 20Mi)
@ -283,6 +288,7 @@ and may be set in the config file.
--drive-list-chunk int Size of listing chunk 100-1000, 0 to disable (default 1000)
--drive-pacer-burst int Number of API calls to allow without sleeping (default 100)
--drive-pacer-min-sleep Duration Minimum time to sleep between API calls (default 100ms)
--drive-resource-key string Resource key for accessing a link-shared file
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive
--drive-server-side-across-configs Allow server-side operations (e.g. copy) to work across different drive configs
@ -337,8 +343,8 @@ and may be set in the config file.
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-disable-epsv Disable using EPSV even if server advertises support
--ftp-disable-mlsd Disable using MLSD even if server advertises support
--ftp-disable-utf8 Disable using UTF-8 even if server advertises support
--ftp-disable-tls13 Disable TLS 1.3 (workaround for FTP servers with buggy TLS)
--ftp-disable-utf8 Disable using UTF-8 even if server advertises support
--ftp-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot)
--ftp-explicit-tls Use Explicit FTPS (FTP over TLS)
--ftp-host string FTP host to connect to
@ -357,8 +363,10 @@ and may be set in the config file.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies
--gcs-client-id string OAuth Client Id
--gcs-client-secret string OAuth Client Secret
--gcs-decompress If set this will decompress gzip encoded objects
--gcs-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gcs-location string Location for the newly created buckets
--gcs-no-check-bucket If set, don't attempt to check the bucket exists or create it
--gcs-object-acl string Access Control List for new objects
--gcs-project-number string Project number
--gcs-service-account-file string Service Account Credentials JSON file path
@ -384,10 +392,24 @@ and may be set in the config file.
--hdfs-namenode string Hadoop name node and port
--hdfs-service-principal-name string Kerberos service principal name for the namenode
--hdfs-username string Hadoop user name
--hidrive-auth-url string Auth server URL
--hidrive-chunk-size SizeSuffix Chunksize for chunked uploads (default 48Mi)
--hidrive-client-id string OAuth Client Id
--hidrive-client-secret string OAuth Client Secret
--hidrive-disable-fetching-member-count Do not fetch number of objects in directories unless it is absolutely necessary
--hidrive-encoding MultiEncoder The encoding for the backend (default Slash,Dot)
--hidrive-endpoint string Endpoint for the service (default "https://api.hidrive.strato.com/2.1")
--hidrive-root-prefix string The root/parent folder for all paths (default "/")
--hidrive-scope-access string Access permissions that rclone should use when requesting access from HiDrive (default "rw")
--hidrive-scope-role string User-level that rclone should use when requesting access from HiDrive (default "user")
--hidrive-token string OAuth Access Token as a JSON blob
--hidrive-token-url string Token server url
--hidrive-upload-concurrency int Concurrency for chunked uploads (default 4)
--hidrive-upload-cutoff SizeSuffix Cutoff/Threshold for chunked uploads (default 96Mi)
--http-headers CommaSepList Set HTTP headers for all transactions
--http-no-head Don't use HEAD requests
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--http-url string URL of HTTP host to connect to
--hubic-auth-url string Auth server URL
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi)
--hubic-client-id string OAuth Client Id
@ -396,6 +418,13 @@ and may be set in the config file.
--hubic-no-chunk Don't chunk files during streaming upload
--hubic-token string OAuth Access Token as a JSON blob
--hubic-token-url string Token server url
--internetarchive-access-key-id string IAS3 Access Key
--internetarchive-disable-checksum Don't ask the server to test against MD5 checksum calculated by rclone (default true)
--internetarchive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot)
--internetarchive-endpoint string IAS3 Endpoint (default "https://s3.us.archive.org")
--internetarchive-front-endpoint string Host of InternetArchive Frontend (default "https://archive.org")
--internetarchive-secret-access-key string IAS3 Secret Key (password)
--internetarchive-wait-archive Duration Timeout for waiting the server's processing tasks (specifically archive and book_op) to finish (default 0s)
--jottacloud-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi)
@ -417,7 +446,7 @@ and may be set in the config file.
--local-no-preallocate Disable preallocation of disk space for transferred files
--local-no-set-modtime Disable setting modtime
--local-no-sparse Disable sparse files for multi-thread downloads
--local-nounc string Disable UNC (long path names) conversion on Windows
--local-nounc Disable UNC (long path names) conversion on Windows
--local-unicode-normalization Apply unicode NFC normalization to paths and filenames
--local-zero-size-links Assume the Stat size of links is zero (and read them instead) (deprecated)
--mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true)
@ -438,11 +467,11 @@ and may be set in the config file.
--netstorage-protocol string Select between HTTP or HTTPS protocol (default "https")
--netstorage-secret string Set the NetStorage account secret/G2O key for authentication (obscured)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only)
--onedrive-access-scopes SpaceSepList Set scopes to be requested by rclone (default Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access)
--onedrive-auth-url string Auth server URL
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes) (default 10Mi)
--onedrive-client-id string OAuth Client Id
--onedrive-client-secret string OAuth Client Secret
--onedrive-disable-site-permission Disable the request for Sites.Read.All permission
--onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive (personal | business | documentLibrary)
--onedrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
@ -466,9 +495,11 @@ and may be set in the config file.
--pcloud-client-secret string OAuth Client Secret
--pcloud-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--pcloud-hostname string Hostname to connect to (default "api.pcloud.com")
--pcloud-password string Your pcloud password (obscured)
--pcloud-root-folder-id string Fill in for rclone to use a non root folder as its starting point (default "d0")
--pcloud-token string OAuth Access Token as a JSON blob
--pcloud-token-url string Token server url
--pcloud-username string Your pcloud username
--premiumizeme-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--putio-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--qingstor-access-key-id string QingStor Access Key ID
@ -521,6 +552,7 @@ and may be set in the config file.
--s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint
--s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset)
--s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads
--s3-v2-auth If true use v2 authentication
--seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled)
--seafile-create-library Should rclone create a library if it doesn't exist
@ -531,6 +563,8 @@ and may be set in the config file.
--seafile-url string URL of seafile host to connect to
--seafile-user string User name (usually email address)
--sftp-ask-password Allow asking for SFTP password when needed
--sftp-chunk-size SizeSuffix Upload and download chunk size (default 32Ki)
--sftp-concurrency int The maximum number of outstanding requests for one file (default 64)
--sftp-disable-concurrent-reads If set don't use concurrent reads
--sftp-disable-concurrent-writes If set don't use concurrent writes
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available
@ -543,12 +577,14 @@ and may be set in the config file.
--sftp-known-hosts-file string Optional path to known_hosts file
--sftp-md5sum-command string The command used to read md5 hashes
--sftp-pass string SSH password, leave blank to use ssh-agent (obscured)
--sftp-path-override string Override path used by SSH connection
--sftp-path-override string Override path used by SSH shell commands
--sftp-port int SSH port number (default 22)
--sftp-pubkey-file string Optional path to public key file
--sftp-server-command string Specifies the path or command to run a sftp server on the remote host
--sftp-set-env SpaceSepList Environment variables to pass to sftp and commands
--sftp-set-modtime Set the modified time on the remote if set (default true)
--sftp-sha1sum-command string The command used to read sha1 hashes
--sftp-shell-type string The type of SSH shell on remote server, if any
--sftp-skip-links Set to skip any symlinks and any other non regular files
--sftp-subsystem string Specifies the SSH2 subsystem on the remote host (default "sftp")
--sftp-use-fstat If set use fstat instead of stat
@ -605,6 +641,7 @@ and may be set in the config file.
--union-action-policy string Policy to choose upstream on ACTION category (default "epall")
--union-cache-time int Cache time of usage and free space (in seconds) (default 120)
--union-create-policy string Policy to choose upstream on CREATE category (default "epmfs")
--union-min-free-space SizeSuffix Minimum viable free space for lfs/eplfs policies (default 1Gi)
--union-search-policy string Policy to choose upstream on SEARCH category (default "ff")
--union-upstreams string List of space separated upstreams
--uptobox-access-token string Your access token
@ -616,7 +653,7 @@ and may be set in the config file.
--webdav-pass string Password (obscured)
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--webdav-vendor string Name of the WebDAV site/service/software you are using
--yandex-auth-url string Auth server URL
--yandex-client-id string OAuth Client Id
--yandex-client-secret string OAuth Client Secret

View file

@ -138,7 +138,7 @@ Just hit a selection number when prompted.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/ftp/ftp.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to ftp (FTP Connection).
Here are the Standard options specific to ftp (FTP).
#### --ftp-host
@ -221,7 +221,7 @@ Properties:
### Advanced options
Here are the advanced options specific to ftp (FTP Connection).
Here are the Advanced options specific to ftp (FTP).
#### --ftp-concurrency

View file

@ -273,7 +273,7 @@ as they can't be used in JSON strings.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/googlecloudstorage/googlecloudstorage.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
Here are the Standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
#### --gcs-client-id
@ -548,7 +548,7 @@ Properties:
### Advanced options
Here are the advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
Here are the Advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
#### --gcs-token
@ -587,6 +587,40 @@ Properties:
- Type: string
- Required: false
#### --gcs-no-check-bucket
If set, don't attempt to check the bucket exists or create it.
This can be useful when trying to minimise the number of transactions
rclone does if you know the bucket exists already.
Properties:
- Config: no_check_bucket
- Env Var: RCLONE_GCS_NO_CHECK_BUCKET
- Type: bool
- Default: false
#### --gcs-decompress
If set this will decompress gzip encoded objects.
It is possible to upload objects to GCS with "Content-Encoding: gzip"
set. Normally rclone will download these files as compressed objects.
If this flag is set then rclone will decompress these files with
"Content-Encoding: gzip" as they are received. This means that rclone
can't check the size and hash but the file contents will be decompressed.
Properties:
- Config: decompress
- Env Var: RCLONE_GCS_DECOMPRESS
- Type: bool
- Default: false
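As a minimal sketch (the remote name, project number, and credentials path are illustrative), the two options above could be combined in the config file for a bucket that is known to exist and holds gzip-encoded objects:

```
[gcs]
type = google cloud storage
project_number = 12345678
service_account_file = /path/to/credentials.json
no_check_bucket = true
decompress = true
```

With `no_check_bucket` set rclone skips the bucket existence check, and with `decompress` set downloads arrive with the gzip encoding removed (so size and hash checks are not possible).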
#### --gcs-encoding
The encoding for the backend.

View file

@ -224,7 +224,7 @@ This is similar to the Sharing tab in the Google Photos web interface.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/googlephotos/googlephotos.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to google photos (Google Photos).
Here are the Standard options specific to google photos (Google Photos).
#### --gphotos-client-id
@ -268,7 +268,7 @@ Properties:
### Advanced options
Here are the advanced options specific to google photos (Google Photos).
Here are the Advanced options specific to google photos (Google Photos).
#### --gphotos-token

View file

@ -172,7 +172,7 @@ or by full re-read/re-write of the files.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/hasher/hasher.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to hasher (Better checksums for other remotes).
Here are the Standard options specific to hasher (Better checksums for other remotes).
#### --hasher-remote
@ -209,7 +209,7 @@ Properties:
### Advanced options
Here are the advanced options specific to hasher (Better checksums for other remotes).
Here are the Advanced options specific to hasher (Better checksums for other remotes).
#### --hasher-auto-size
@ -222,6 +222,12 @@ Properties:
- Type: SizeSuffix
- Default: 0
### Metadata
Any metadata supported by the underlying remote is read and written.
See the [metadata](/docs/#metadata) docs for more info.
## Backend commands
Here are the commands specific to the hasher backend.
@ -232,7 +238,7 @@ Run them with
The help below will explain what arguments each command takes.
See [the "rclone backend" command](/commands/rclone_backend/) for more
See the [backend](/commands/rclone_backend/) command for more
info on how to pass options and arguments.
These can be run on a running backend using the rc command

View file

@ -151,7 +151,7 @@ Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8).
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/hdfs/hdfs.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to hdfs (Hadoop distributed file system).
Here are the Standard options specific to hdfs (Hadoop distributed file system).
#### --hdfs-namenode
@ -182,7 +182,7 @@ Properties:
### Advanced options
Here are the advanced options specific to hdfs (Hadoop distributed file system).
Here are the Advanced options specific to hdfs (Hadoop distributed file system).
#### --hdfs-service-principal-name

View file

@ -193,7 +193,7 @@ See the below section about configuration options for more details.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/hidrive/hidrive.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to hidrive (HiDrive).
Here are the Standard options specific to hidrive (HiDrive).
#### --hidrive-client-id
@ -239,7 +239,7 @@ Properties:
### Advanced options
Here are the advanced options specific to hidrive (HiDrive).
Here are the Advanced options specific to hidrive (HiDrive).
#### --hidrive-token
@ -346,25 +346,6 @@ Properties:
- Type: bool
- Default: false
#### --hidrive-disable-unicode-normalization
Do not apply Unicode "Normalization Form C" to remote paths.
In Unicode there are multiple valid representations for the same abstract character.
They (should) result in the same visual appearance, but are represented by different byte-sequences.
This is known as canonical equivalence.
In HiDrive paths are always represented as byte-sequences.
This means that two paths that are canonically equivalent (and therefore look the same) are treated as two distinct paths.
As this behaviour may be undesired, by default rclone will apply unicode normalization to paths it will access.
Properties:
- Config: disable_unicode_normalization
- Env Var: RCLONE_HIDRIVE_DISABLE_UNICODE_NORMALIZATION
- Type: bool
- Default: false
#### --hidrive-chunk-size
Chunksize for chunked uploads.

View file

@ -126,11 +126,11 @@ or:
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/http/http.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to http (http Connection).
Here are the Standard options specific to http (HTTP).
#### --http-url
URL of http host to connect to.
URL of HTTP host to connect to.
E.g. "https://example.com", or "https://user:pass@example.com" to use a username and password.
@ -143,7 +143,7 @@ Properties:
### Advanced options
Here are the advanced options specific to http (http Connection).
Here are the Advanced options specific to http (HTTP).
#### --http-headers

View file

@ -109,7 +109,7 @@ are the same.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/hubic/hubic.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to hubic (Hubic).
Here are the Standard options specific to hubic (Hubic).
#### --hubic-client-id
@ -139,7 +139,7 @@ Properties:
### Advanced options
Here are the advanced options specific to hubic (Hubic).
Here are the Advanced options specific to hubic (Hubic).
#### --hubic-token

View file

@ -146,7 +146,7 @@ y/e/d> y
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/internetarchive/internetarchive.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to internetarchive (Internet Archive).
Here are the Standard options specific to internetarchive (Internet Archive).
#### --internetarchive-access-key-id
@ -177,7 +177,7 @@ Properties:
### Advanced options
Here are the advanced options specific to internetarchive (Internet Archive).
Here are the Advanced options specific to internetarchive (Internet Archive).
#### --internetarchive-endpoint
@ -246,4 +246,32 @@ Properties:
- Type: MultiEncoder
- Default: Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot
### Metadata
Metadata fields provided by Internet Archive.
If there are multiple values for a key, only the first one is returned.
This is a limitation of rclone, which supports only one value per key.

The owner can add custom keys. The metadata feature reads all keys, including custom ones.
Here are the possible system metadata items for the internetarchive backend.
| Name | Help | Type | Example | Read Only |
|------|------|------|---------|-----------|
| crc32 | CRC32 calculated by Internet Archive | string | 01234567 | N |
| format | Name of format identified by Internet Archive | string | Comma-Separated Values | N |
| md5 | MD5 hash calculated by Internet Archive | string | 01234567012345670123456701234567 | N |
| mtime | Time of last modification, managed by Rclone | RFC 3339 | 2006-01-02T15:04:05.999999999Z | N |
| name | Full file path, without the bucket part | filename | backend/internetarchive/internetarchive.go | N |
| old_version | Whether the file was replaced and moved by keep-old-version flag | boolean | true | N |
| rclone-ia-mtime | Time of last modification, managed by Internet Archive | RFC 3339 | 2006-01-02T15:04:05.999999999Z | N |
| rclone-mtime | Time of last modification, managed by Rclone | RFC 3339 | 2006-01-02T15:04:05.999999999Z | N |
| rclone-update-track | Random value used by Rclone for tracking changes inside Internet Archive | string | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa | N |
| sha1 | SHA1 hash calculated by Internet Archive | string | 0123456701234567012345670123456701234567 | N |
| size | File size in bytes | decimal number | 123456 | N |
| source | The source of the file | string | original | N |
| viruscheck | The last time viruscheck process was run for the file (?) | unixtime | 1654191352 | N |
See the [metadata](/docs/#metadata) docs for more info.
{{< rem autogenerated options stop >}}

View file

@ -266,7 +266,7 @@ and the current usage.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/jottacloud/jottacloud.go then run make backenddocs" >}}
### Advanced options
Here are the advanced options specific to jottacloud (Jottacloud).
Here are the Advanced options specific to jottacloud (Jottacloud).
#### --jottacloud-md5-memory-limit

View file

@ -113,7 +113,7 @@ as they can't be used in XML strings.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/koofr/koofr.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).
Here are the Standard options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).
#### --koofr-provider
@ -200,7 +200,7 @@ Properties:
### Advanced options
Here are the advanced options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).
Here are the Advanced options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).
#### --koofr-mountid

View file

@ -327,7 +327,7 @@ where it isn't supported (e.g. Windows) it will be ignored.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/local/local.go then run make backenddocs" >}}
### Advanced options
Here are the advanced options specific to local (Local Disk).
Here are the Advanced options specific to local (Local Disk).
#### --local-nounc
@ -337,8 +337,8 @@ Properties:
- Config: nounc
- Env Var: RCLONE_LOCAL_NOUNC
- Type: string
- Required: false
- Type: bool
- Default: false
- Examples:
- "true"
- Disables long file names.
@ -586,7 +586,6 @@ Here are the possible system metadata items for the local backend.
| rdev | Device ID (if special file) | hexadecimal | 1abc | N |
| uid | User ID of owner | decimal number | 500 | N |
See the [metadata](/docs/#metadata) docs for more info.
## Backend commands
@ -599,7 +598,7 @@ Run them with
The help below will explain what arguments each command takes.
See [the "rclone backend" command](/commands/rclone_backend/) for more
See the [backend](/commands/rclone_backend/) command for more
info on how to pass options and arguments.
These can be run on a running backend using the rc command

View file

@ -156,7 +156,7 @@ as they can't be used in JSON strings.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/mailru/mailru.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to mailru (Mail.ru Cloud).
Here are the Standard options specific to mailru (Mail.ru Cloud).
#### --mailru-user
@ -209,7 +209,7 @@ Properties:
### Advanced options
Here are the advanced options specific to mailru (Mail.ru Cloud).
Here are the Advanced options specific to mailru (Mail.ru Cloud).
#### --mailru-speedup-file-patterns

View file

@ -192,7 +192,7 @@ have got the remote blocked for a while.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/mega/mega.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to mega (Mega).
Here are the Standard options specific to mega (Mega).
#### --mega-user
@ -220,7 +220,7 @@ Properties:
### Advanced options
Here are the advanced options specific to mega (Mega).
Here are the Advanced options specific to mega (Mega).
#### --mega-debug

View file

@ -177,7 +177,7 @@ NetStorage remote supports the purge feature by using the "quick-delete" NetStor
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/netstorage/netstorage.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to netstorage (Akamai NetStorage).
Here are the Standard options specific to netstorage (Akamai NetStorage).
#### --netstorage-host
@ -220,7 +220,7 @@ Properties:
### Advanced options
Here are the advanced options specific to netstorage (Akamai NetStorage).
Here are the Advanced options specific to netstorage (Akamai NetStorage).
#### --netstorage-protocol
@ -251,7 +251,7 @@ Run them with
The help below will explain what arguments each command takes.
See [the "rclone backend" command](/commands/rclone_backend/) for more
See the [backend](/commands/rclone_backend/) command for more
info on how to pass options and arguments.
These can be run on a running backend using the rc command
@ -277,10 +277,4 @@ the object that will be the target of the symlink (for example, /links/mylink).
Include the file extension for the object, if applicable.
`rclone backend symlink <src> <path>`
## Support
If you have any questions or issues, please contact [Akamai Technical Support
via Control Center or by
phone](https://control.akamai.com/apps/support-ui/#/contact-support).
{{< rem autogenerated options stop >}}

View file

@ -217,7 +217,7 @@ the OneDrive website.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/onedrive/onedrive.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to onedrive (Microsoft OneDrive).
Here are the Standard options specific to onedrive (Microsoft OneDrive).
#### --onedrive-client-id
@ -267,7 +267,7 @@ Properties:
### Advanced options
Here are the advanced options specific to onedrive (Microsoft OneDrive).
Here are the Advanced options specific to onedrive (Microsoft OneDrive).
#### --onedrive-token
@ -359,6 +359,28 @@ Properties:
- Type: string
- Required: false
#### --onedrive-access-scopes
Set scopes to be requested by rclone.
Choose or manually enter a custom space separated list of all scopes that rclone should request.
Properties:
- Config: access_scopes
- Env Var: RCLONE_ONEDRIVE_ACCESS_SCOPES
- Type: SpaceSepList
- Default: Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access
- Examples:
- "Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access"
- Read and write access to all resources
- "Files.Read Files.Read.All Sites.Read.All offline_access"
- Read only access to all resources
- "Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All offline_access"
- Read and write access to all resources, without the ability to browse SharePoint sites.
- Same as if disable_site_permission was set to true
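For example, a read-only OneDrive remote could use a restricted scope list like the following sketch (the remote name is illustrative; the scope string is taken from the read-only example above):

```
[onedrive-ro]
type = onedrive
access_scopes = Files.Read Files.Read.All Sites.Read.All offline_access
```

Reconnecting the remote (`rclone config reconnect onedrive-ro:`) will then request only these scopes during authorization.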
#### --onedrive-disable-site-permission
Disable the request for Sites.Read.All permission.

View file

@ -102,7 +102,7 @@ as they can't be used in JSON strings.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/opendrive/opendrive.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to opendrive (OpenDrive).
Here are the Standard options specific to opendrive (OpenDrive).
#### --opendrive-username
@ -130,7 +130,7 @@ Properties:
### Advanced options
Here are the advanced options specific to opendrive (OpenDrive).
Here are the Advanced options specific to opendrive (OpenDrive).
#### --opendrive-encoding

View file

@ -144,7 +144,7 @@ the `root_folder_id` in the config.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/pcloud/pcloud.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to pcloud (Pcloud).
Here are the Standard options specific to pcloud (Pcloud).
#### --pcloud-client-id
@ -174,7 +174,7 @@ Properties:
### Advanced options
Here are the advanced options specific to pcloud (Pcloud).
Here are the Advanced options specific to pcloud (Pcloud).
#### --pcloud-token
View file
@ -104,7 +104,7 @@ as they can't be used in JSON strings.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/premiumizeme/premiumizeme.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to premiumizeme (premiumize.me).
Here are the Standard options specific to premiumizeme (premiumize.me).
#### --premiumizeme-api-key
@ -122,7 +122,7 @@ Properties:
### Advanced options
Here are the advanced options specific to premiumizeme (premiumize.me).
Here are the Advanced options specific to premiumizeme (premiumize.me).
#### --premiumizeme-encoding
View file
@ -111,7 +111,7 @@ as they can't be used in JSON strings.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/putio/putio.go then run make backenddocs" >}}
### Advanced options
Here are the advanced options specific to putio (Put.io).
Here are the Advanced options specific to putio (Put.io).
#### --putio-encoding
View file
@ -144,7 +144,7 @@ as they can't be used in JSON strings.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/qingstor/qingstor.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to qingstor (QingCloud Object Storage).
Here are the Standard options specific to qingstor (QingCloud Object Storage).
#### --qingstor-env-auth
@ -228,7 +228,7 @@ Properties:
### Advanced options
Here are the advanced options specific to qingstor (QingCloud Object Storage).
Here are the Advanced options specific to qingstor (QingCloud Object Storage).
#### --qingstor-connection-retries
View file
@ -544,6 +544,7 @@ This takes the following parameters:
- state - state to restart with - used with continue
- result - result to restart with - used with continue
See the [config create](/commands/rclone_config_create/) command for more information on the above.
**Authentication is required for this call.**
@ -595,6 +596,7 @@ This takes the following parameters:
- name - name of remote
- parameters - a map of \{ "key": "value" \} pairs
See the [config password](/commands/rclone_config_password/) command for more information on the above.
**Authentication is required for this call.**
@ -623,6 +625,7 @@ This takes the following parameters:
- state - state to restart with - used with continue
- result - result to restart with - used with continue
See the [config update](/commands/rclone_config_update/) command for more information on the above.
**Authentication is required for this call.**
@ -1069,7 +1072,7 @@ This takes the following parameters:
The result is as returned from rclone about --json
See the [about](/commands/rclone_size/) command for more information on the above.
See the [about](/commands/rclone_about/) command for more information on the above.
**Authentication is required for this call.**
@ -1101,7 +1104,7 @@ This takes the following parameters:
- fs - a remote name string e.g. "drive:"
- remote - a path within that remote e.g. "dir"
- url - string, URL to read from
- autoFilename - boolean, set to true to retrieve destination file name from url
- autoFilename - boolean, set to true to retrieve destination file name from url
See the [copyurl](/commands/rclone_copyurl/) command for more information on the above.
@ -1138,46 +1141,103 @@ This returns info about the remote passed in;
```
{
// optional features and whether they are available or not
"Features": {
"About": true,
"BucketBased": false,
"CanHaveEmptyDirectories": true,
"CaseInsensitive": false,
"ChangeNotify": false,
"CleanUp": false,
"Copy": false,
"DirCacheFlush": false,
"DirMove": true,
"DuplicateFiles": false,
"GetTier": false,
"ListR": false,
"MergeDirs": false,
"Move": true,
"OpenWriterAt": true,
"PublicLink": false,
"Purge": true,
"PutStream": true,
"PutUnchecked": false,
"ReadMimeType": false,
"ServerSideAcrossConfigs": false,
"SetTier": false,
"SetWrapper": false,
"UnWrap": false,
"WrapFs": false,
"WriteMimeType": false
},
// Names of hashes available
"Hashes": [
"MD5",
"SHA-1",
"DropboxHash",
"QuickXorHash"
],
"Name": "local", // Name as created
"Precision": 1, // Precision of timestamps in ns
"Root": "/", // Path as created
"String": "Local file system at /" // how the remote will appear in logs
// optional features and whether they are available or not
"Features": {
"About": true,
"BucketBased": false,
"BucketBasedRootOK": false,
"CanHaveEmptyDirectories": true,
"CaseInsensitive": false,
"ChangeNotify": false,
"CleanUp": false,
"Command": true,
"Copy": false,
"DirCacheFlush": false,
"DirMove": true,
"Disconnect": false,
"DuplicateFiles": false,
"GetTier": false,
"IsLocal": true,
"ListR": false,
"MergeDirs": false,
"MetadataInfo": true,
"Move": true,
"OpenWriterAt": true,
"PublicLink": false,
"Purge": true,
"PutStream": true,
"PutUnchecked": false,
"ReadMetadata": true,
"ReadMimeType": false,
"ServerSideAcrossConfigs": false,
"SetTier": false,
"SetWrapper": false,
"Shutdown": false,
"SlowHash": true,
"SlowModTime": false,
"UnWrap": false,
"UserInfo": false,
"UserMetadata": true,
"WrapFs": false,
"WriteMetadata": true,
"WriteMimeType": false
},
// Names of hashes available
"Hashes": [
"md5",
"sha1",
"whirlpool",
"crc32",
"sha256",
"dropbox",
"mailru",
"quickxor"
],
"Name": "local", // Name as created
"Precision": 1, // Precision of timestamps in ns
"Root": "/", // Path as created
"String": "Local file system at /", // how the remote will appear in logs
// Information about the system metadata for this backend
"MetadataInfo": {
"System": {
"atime": {
"Help": "Time of last access",
"Type": "RFC 3339",
"Example": "2006-01-02T15:04:05.999999999Z07:00"
},
"btime": {
"Help": "Time of file birth (creation)",
"Type": "RFC 3339",
"Example": "2006-01-02T15:04:05.999999999Z07:00"
},
"gid": {
"Help": "Group ID of owner",
"Type": "decimal number",
"Example": "500"
},
"mode": {
"Help": "File type and mode",
"Type": "octal, unix style",
"Example": "0100664"
},
"mtime": {
"Help": "Time of last modification",
"Type": "RFC 3339",
"Example": "2006-01-02T15:04:05.999999999Z07:00"
},
"rdev": {
"Help": "Device ID (if special file)",
"Type": "hexadecimal",
"Example": "1abc"
},
"uid": {
"Help": "User ID of owner",
"Type": "decimal number",
"Example": "500"
}
},
"Help": "Textual help string\n"
}
}
```
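As a minimal sketch of how a client might consume this response, the snippet below parses an abbreviated fsinfo result and branches on the optional features before issuing operations. The abbreviated JSON here is hypothetical (only a few of the fields shown above are included); the field names match the docs, but a real response comes from `rclone rc operations/fsinfo`.

```python
import json

# Hypothetical, abbreviated fsinfo response -- field names follow the
# documented output above, values are illustrative only.
fsinfo = json.loads("""
{
  "Features": {"Move": true, "Copy": false, "SlowHash": true},
  "Hashes": ["md5", "sha1"],
  "Name": "local",
  "Precision": 1
}
""")

# A client can check optional features before relying on them,
# e.g. falling back to copy+delete if server-side Move is absent.
can_server_side_move = fsinfo["Features"].get("Move", False)
supports_md5 = "md5" in fsinfo["Hashes"]
```
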
@ -1200,6 +1260,7 @@ This takes the following parameters:
- noMimeType - If set don't show mime types
- dirsOnly - If set only show directories
- filesOnly - If set only show files
- metadata - If set return metadata of objects also
- hashTypes - array of strings of hash types to show if showHash set
Returns:
@ -1207,7 +1268,7 @@ Returns:
- list
- This is an array of objects as described in the lsjson command
See the [lsjson](/commands/rclone_lsjson/) for more information on the above and examples.
See the [lsjson](/commands/rclone_lsjson/) command for more information on the above and examples.
**Authentication is required for this call.**
@ -1294,7 +1355,6 @@ Returns:
- count - number of files
- bytes - number of bytes in those files
- sizeless - number of files with unknown size, included in count but not accounted for in bytes
See the [size](/commands/rclone_size/) command for more information on the above.
@ -1316,7 +1376,7 @@ The result is
Note that if you are only interested in files then it is much more
efficient to set the filesOnly flag in the options.
See the [lsjson](/commands/rclone_lsjson/) for more information on the above and examples.
See the [lsjson](/commands/rclone_lsjson/) command for more information on the above and examples.
**Authentication is required for this call.**
@ -1542,6 +1602,7 @@ This takes the following parameters:
- dstFs - a remote name string e.g. "drive:dst" for the destination
- createEmptySrcDirs - create empty src directories on destination if set
See the [copy](/commands/rclone_copy/) command for more information on the above.
**Authentication is required for this call.**
@ -1555,6 +1616,7 @@ This takes the following parameters:
- createEmptySrcDirs - create empty src directories on destination if set
- deleteEmptySrcDirs - delete empty src directories if set
See the [move](/commands/rclone_move/) command for more information on the above.
**Authentication is required for this call.**
View file
@ -13,7 +13,7 @@ The S3 backend can be used with a number of different providers:
{{< provider name="Ceph" home="http://ceph.com/" config="/s3/#ceph" >}}
{{< provider name="China Mobile Ecloud Elastic Object Storage (EOS)" home="https://ecloud.10086.cn/home/product-introduction/eos/" config="/s3/#china-mobile-ecloud-eos" >}}
{{< provider name="Cloudflare R2" home="https://blog.cloudflare.com/r2-open-beta/" config="/s3/#cloudflare-r2" >}}
{{< provider name="Arvan Cloud Object Storage (AOS)" home="https://www.arvancloud.com/en/products/cloud-storage" config="/s3/#arvan-cloud-object-storage-aos" >}}
{{< provider name="Arvan Cloud Object Storage (AOS)" home="https://www.arvancloud.com/en/products/cloud-storage" config="/s3/#arvan-cloud" >}}
{{< provider name="DigitalOcean Spaces" home="https://www.digitalocean.com/products/object-storage/" config="/s3/#digitalocean-spaces" >}}
{{< provider name="Dreamhost" home="https://www.dreamhost.com/cloud/storage/" config="/s3/#dreamhost" >}}
{{< provider name="Huawei OBS" home="https://www.huaweicloud.com/intl/en-us/product/obs.html" config="/s3/#huawei-obs" >}}
@ -571,7 +571,7 @@ A simple solution is to set the `--s3-upload-cutoff 0` and force all the files t
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/s3/s3.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS).
Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi).
#### --s3-provider
@ -592,6 +592,8 @@ Properties:
- Ceph Object Storage
- "ChinaMobile"
- China Mobile Ecloud Elastic Object Storage (EOS)
- "Cloudflare"
- Cloudflare R2 Storage
- "ArvanCloud"
- Arvan Cloud Object Storage (AOS)
- "DigitalOcean"
@ -828,6 +830,67 @@ Properties:
- Amsterdam, The Netherlands
- "fr-par"
- Paris, France
- "pl-waw"
- Warsaw, Poland
#### --s3-region
Region to connect to - the location where your bucket will be created and your data stored. Needs to be the same as your endpoint.
Properties:
- Config: region
- Env Var: RCLONE_S3_REGION
- Provider: HuaweiOBS
- Type: string
- Required: false
- Examples:
- "af-south-1"
- AF-Johannesburg
- "ap-southeast-2"
- AP-Bangkok
- "ap-southeast-3"
- AP-Singapore
- "cn-east-3"
- CN East-Shanghai1
- "cn-east-2"
- CN East-Shanghai2
- "cn-north-1"
- CN North-Beijing1
- "cn-north-4"
- CN North-Beijing4
- "cn-south-1"
- CN South-Guangzhou
- "ap-southeast-1"
- CN-Hong Kong
- "sa-argentina-1"
- LA-Buenos Aires1
- "sa-peru-1"
- LA-Lima1
- "na-mexico-1"
- LA-Mexico City1
- "sa-chile-1"
- LA-Santiago2
- "sa-brazil-1"
- LA-Sao Paulo1
- "ru-northwest-2"
- RU-Moscow2
#### --s3-region
Region to connect to.
Properties:
- Config: region
- Env Var: RCLONE_S3_REGION
- Provider: Cloudflare
- Type: string
- Required: false
- Examples:
- "auto"
- R2 buckets are automatically distributed across Cloudflare's data centers for low latency.
#### --s3-region
@ -839,7 +902,7 @@ Properties:
- Config: region
- Env Var: RCLONE_S3_REGION
- Provider: !AWS,Alibaba,ChinaMobile,ArvanCloud,RackCorp,Scaleway,Storj,TencentCOS,HuaweiOBS,IDrive
- Provider: !AWS,Alibaba,ChinaMobile,Cloudflare,ArvanCloud,RackCorp,Scaleway,Storj,TencentCOS,HuaweiOBS,IDrive
- Type: string
- Required: false
- Examples:
@ -868,6 +931,8 @@ Properties:
Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API.
Properties:
- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Provider: ChinaMobile
@ -925,7 +990,7 @@ Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API.
- Gansu China (Lanzhou)
- "eos-shanxi-1.cmecloud.cn"
- Shanxi China (Taiyuan)
- eos-liaoning-1.cmecloud.cn"
- "eos-liaoning-1.cmecloud.cn"
- Liaoning China (Shenyang)
- "eos-hebei-1.cmecloud.cn"
- Hebei China (Shijiazhuang)
@ -940,6 +1005,8 @@ Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API.
Endpoint for Arvan Cloud Object Storage (AOS) API.
Properties:
- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Provider: ArvanCloud
@ -952,50 +1019,6 @@ Endpoint for Arvan Cloud Object Storage (AOS) API.
- "s3.ir-tbz-sh1.arvanstorage.com"
- Tabriz Iran (Shahriar)
#### --s3-endpoint
Endpoint for Huawei Cloud Object Storage Service (OBS) API.
- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Provider: HuaweiOBS
- Type: string
- Required: false
- Examples:
- "obs.af-south-1.myhuaweicloud.com"
- AF-Johannesburg Endpoint
- "obs.ap-southeast-2.myhuaweicloud.com"
- AP-Bangkok Endpoint
- "obs.ap-southeast-3.myhuaweicloud.com"
- AP-Singapore Endpoint
- "obs.cn-east-3.myhuaweicloud.com"
- CN East-Shanghai1 Endpoint
- "obs.cn-east-2.myhuaweicloud.com"
- CN East-Shanghai2 Endpoint
- "obs.cn-north-1.myhuaweicloud.com"
- CN North-Beijing1 Endpoint
- "obs.cn-north-4.myhuaweicloud.com"
- CN North-Beijing4 Endpoint
- "obs.cn-south-1.myhuaweicloud.com"
- CN South-Guangzhou Endpoint
- "obs.ap-southeast-1.myhuaweicloud.com"
- CN-Hong Kong Endpoint
- "obs.sa-argentina-1.myhuaweicloud.com"
- LA-Buenos Aires1 Endpoint
- "obs.sa-peru-1.myhuaweicloud.com"
- LA-Lima1 Endpoint
- "obs.na-mexico-1.myhuaweicloud.com"
- LA-Mexico City1 Endpoint
- "obs.sa-chile-1.myhuaweicloud.com"
- LA-Santiago2 Endpoint
- "obs.sa-brazil-1.myhuaweicloud.com"
- LA-Sao Paulo1 Endpoint
- "obs.ru-northwest-2.myhuaweicloud.com"
- RU-Moscow2 Endpoint
#### --s3-endpoint
Endpoint for IBM COS S3 API.
@ -1200,6 +1223,49 @@ Properties:
#### --s3-endpoint
Endpoint for OBS API.
Properties:
- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Provider: HuaweiOBS
- Type: string
- Required: false
- Examples:
- "obs.af-south-1.myhuaweicloud.com"
- AF-Johannesburg
- "obs.ap-southeast-2.myhuaweicloud.com"
- AP-Bangkok
- "obs.ap-southeast-3.myhuaweicloud.com"
- AP-Singapore
- "obs.cn-east-3.myhuaweicloud.com"
- CN East-Shanghai1
- "obs.cn-east-2.myhuaweicloud.com"
- CN East-Shanghai2
- "obs.cn-north-1.myhuaweicloud.com"
- CN North-Beijing1
- "obs.cn-north-4.myhuaweicloud.com"
- CN North-Beijing4
- "obs.cn-south-1.myhuaweicloud.com"
- CN South-Guangzhou
- "obs.ap-southeast-1.myhuaweicloud.com"
- CN-Hong Kong
- "obs.sa-argentina-1.myhuaweicloud.com"
- LA-Buenos Aires1
- "obs.sa-peru-1.myhuaweicloud.com"
- LA-Lima1
- "obs.na-mexico-1.myhuaweicloud.com"
- LA-Mexico City1
- "obs.sa-chile-1.myhuaweicloud.com"
- LA-Santiago2
- "obs.sa-brazil-1.myhuaweicloud.com"
- LA-Sao Paulo1
- "obs.ru-northwest-2.myhuaweicloud.com"
- RU-Moscow2
#### --s3-endpoint
Endpoint for Scaleway Object Storage.
Properties:
@ -1214,6 +1280,8 @@ Properties:
- Amsterdam Endpoint
- "s3.fr-par.scw.cloud"
- Paris Endpoint
- "s3.pl-waw.scw.cloud"
- Warsaw Endpoint
#### --s3-endpoint
@ -1365,7 +1433,7 @@ Properties:
- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Provider: !AWS,IBMCOS,IDrive,TencentCOS,Alibaba,ChinaMobile,ArvanCloud,Scaleway,StackPath,Storj,RackCorp,HuaweiOBS
- Provider: !AWS,IBMCOS,IDrive,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,ArvanCloud,Scaleway,StackPath,Storj,RackCorp
- Type: string
- Required: false
- Examples:
@ -1465,6 +1533,100 @@ Properties:
#### --s3-location-constraint
Location constraint - must match endpoint.
Used when creating buckets only.
Properties:
- Config: location_constraint
- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
- Provider: ChinaMobile
- Type: string
- Required: false
- Examples:
- "wuxi1"
- East China (Suzhou)
- "jinan1"
- East China (Jinan)
- "ningbo1"
- East China (Hangzhou)
- "shanghai1"
- East China (Shanghai-1)
- "zhengzhou1"
- Central China (Zhengzhou)
- "hunan1"
- Central China (Changsha-1)
- "zhuzhou1"
- Central China (Changsha-2)
- "guangzhou1"
- South China (Guangzhou-2)
- "dongguan1"
- South China (Guangzhou-3)
- "beijing1"
- North China (Beijing-1)
- "beijing2"
- North China (Beijing-2)
- "beijing4"
- North China (Beijing-3)
- "huhehaote1"
- North China (Huhehaote)
- "chengdu1"
- Southwest China (Chengdu)
- "chongqing1"
- Southwest China (Chongqing)
- "guiyang1"
- Southwest China (Guiyang)
- "xian1"
- Northwest China (Xian)
- "yunnan"
- Yunnan China (Kunming)
- "yunnan2"
- Yunnan China (Kunming-2)
- "tianjin1"
- Tianjin China (Tianjin)
- "jilin1"
- Jilin China (Changchun)
- "hubei1"
- Hubei China (Xiangyan)
- "jiangxi1"
- Jiangxi China (Nanchang)
- "gansu1"
- Gansu China (Lanzhou)
- "shanxi1"
- Shanxi China (Taiyuan)
- "liaoning1"
- Liaoning China (Shenyang)
- "hebei1"
- Hebei China (Shijiazhuang)
- "fujian1"
- Fujian China (Xiamen)
- "guangxi1"
- Guangxi China (Nanning)
- "anhui1"
- Anhui China (Huainan)
#### --s3-location-constraint
Location constraint - must match endpoint.
Used when creating buckets only.
Properties:
- Config: location_constraint
- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
- Provider: ArvanCloud
- Type: string
- Required: false
- Examples:
- "ir-thr-at1"
- Tehran Iran (Asiatech)
- "ir-tbz-sh1"
- Tabriz Iran (Shahriar)
#### --s3-location-constraint
Location constraint - must match endpoint when using IBM Cloud Public.
For on-prem COS, do not make a selection from this list, hit enter.
@ -1604,7 +1766,7 @@ Properties:
- Config: location_constraint
- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
- Provider: !AWS,IBMCOS,IDrive,Alibaba,ChinaMobile,ArvanCloud,RackCorp,Scaleway,StackPath,Storj,TencentCOS,HuaweiOBS
- Provider: !AWS,IBMCOS,IDrive,Alibaba,HuaweiOBS,ChinaMobile,Cloudflare,ArvanCloud,RackCorp,Scaleway,StackPath,Storj,TencentCOS
- Type: string
- Required: false
@ -1623,7 +1785,7 @@ Properties:
- Config: acl
- Env Var: RCLONE_S3_ACL
- Provider: !Storj
- Provider: !Storj,Cloudflare
- Type: string
- Required: false
- Examples:
@ -1676,7 +1838,7 @@ Properties:
- Config: server_side_encryption
- Env Var: RCLONE_S3_SERVER_SIDE_ENCRYPTION
- Provider: AWS,Ceph,ChinaMobile,ArvanCloud,Minio
- Provider: AWS,Ceph,ChinaMobile,Minio
- Type: string
- Required: false
- Examples:
@ -1760,6 +1922,8 @@ Properties:
The storage class to use when storing new objects in ChinaMobile.
Properties:
- Config: storage_class
- Env Var: RCLONE_S3_STORAGE_CLASS
- Provider: ChinaMobile
@ -1779,6 +1943,8 @@ The storage class to use when storing new objects in ChinaMobile.
The storage class to use when storing new objects in ArvanCloud.
Properties:
- Config: storage_class
- Env Var: RCLONE_S3_STORAGE_CLASS
- Provider: ArvanCloud
@ -1832,7 +1998,7 @@ Properties:
### Advanced options
Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS).
Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi).
#### --s3-bucket-acl
@ -1884,7 +2050,7 @@ Properties:
- Config: sse_customer_algorithm
- Env Var: RCLONE_S3_SSE_CUSTOMER_ALGORITHM
- Provider: AWS,Ceph,ChinaMobile,ArvanCloud,Minio
- Provider: AWS,Ceph,ChinaMobile,Minio
- Type: string
- Required: false
- Examples:
@ -1901,7 +2067,7 @@ Properties:
- Config: sse_customer_key
- Env Var: RCLONE_S3_SSE_CUSTOMER_KEY
- Provider: AWS,Ceph,ChinaMobile,ArvanCloud,Minio
- Provider: AWS,Ceph,ChinaMobile,Minio
- Type: string
- Required: false
- Examples:
@ -1919,7 +2085,7 @@ Properties:
- Config: sse_customer_key_md5
- Env Var: RCLONE_S3_SSE_CUSTOMER_KEY_MD5
- Provider: AWS,Ceph,ChinaMobile,ArvanCloud,Minio
- Provider: AWS,Ceph,ChinaMobile,Minio
- Type: string
- Required: false
- Examples:
@ -1964,6 +2130,13 @@ most 10,000 chunks, this means that by default the maximum size of
a file you can stream upload is 48 GiB. If you wish to stream upload
larger files then you will need to increase chunk_size.
Increasing the chunk size decreases the accuracy of the progress
statistics displayed with the "-P" flag. Rclone treats a chunk as sent
when it has been buffered by the AWS SDK, when in fact it may still be
uploading. A bigger chunk size means a bigger AWS SDK buffer and
progress reporting that deviates further from the truth.
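The 48 GiB figure above can be sketched as a quick calculation, assuming the documented limit of 10,000 chunks per multipart upload and the default 5 MiB chunk size:

```python
# Arithmetic behind the quoted streaming-upload limit: at most
# 10,000 chunks per multipart upload, 5 MiB default chunk size.
MAX_CHUNKS = 10_000
CHUNK_SIZE = 5 * 1024 * 1024            # 5 MiB in bytes

max_stream_bytes = MAX_CHUNKS * CHUNK_SIZE
max_stream_gib = max_stream_bytes / 2**30   # just under 49 GiB, quoted as 48 GiB
```
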
Properties:
- Config: chunk_size
@ -2369,6 +2542,26 @@ Properties:
- Type: Tristate
- Default: unset
#### --s3-use-presigned-request
Whether to use a presigned request or PutObject for single part uploads.
If this is false rclone will use PutObject from the AWS SDK to upload
an object.
Versions of rclone < 1.59 use presigned requests to upload a single
part object and setting this flag to true will re-enable that
functionality. This shouldn't be necessary except in exceptional
circumstances or for testing.
Properties:
- Config: use_presigned_request
- Env Var: RCLONE_S3_USE_PRESIGNED_REQUEST
- Type: bool
- Default: false
### Metadata
User metadata is stored as x-amz-meta- keys. S3 metadata keys are case insensitive and are always returned in lower case.
@ -2386,7 +2579,6 @@ Here are the possible system metadata items for the s3 backend.
| mtime | Time of last modification, read from rclone metadata | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | N |
| tier | Tier of the object | string | GLACIER | **Y** |
See the [metadata](/docs/#metadata) docs for more info.
## Backend commands
@ -2399,7 +2591,7 @@ Run them with
The help below will explain what arguments each command takes.
See [the "rclone backend" command](/commands/rclone_backend/) for more
See the [backend](/commands/rclone_backend/) command for more
info on how to pass options and arguments.
These can be run on a running backend using the rc command
@ -3991,7 +4183,7 @@ d) Delete this remote
y/e/d> y
```
### ArvanCloud
### ArvanCloud {#arvan-cloud}
[ArvanCloud](https://www.arvancloud.com/en/products/cloud-storage) ArvanCloud Object Storage goes beyond the limited traditional file storage.
It gives you access to backup and archived files and allows sharing.
View file
@ -266,7 +266,7 @@ Versions between 6.0 and 6.3 haven't been tested and might not work properly.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/seafile/seafile.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to seafile (seafile).
Here are the Standard options specific to seafile (seafile).
#### --seafile-url
@ -358,7 +358,7 @@ Properties:
### Advanced options
Here are the advanced options specific to seafile (seafile).
Here are the Advanced options specific to seafile (seafile).
#### --seafile-create-library
View file
@ -388,7 +388,7 @@ with a Windows OpenSSH server, rclone will use a built-in shell command
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/sftp/sftp.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to sftp (SSH/SFTP Connection).
Here are the Standard options specific to sftp (SSH/SFTP).
#### --sftp-host
@ -514,7 +514,7 @@ Properties:
#### --sftp-use-insecure-cipher
Enable the use of insecure ciphers and key exchange methods.
Enable the use of insecure ciphers and key exchange methods.
This enables the use of the following insecure ciphers and key exchange methods:
@ -554,7 +554,7 @@ Properties:
### Advanced options
Here are the advanced options specific to sftp (SSH/SFTP Connection).
Here are the Advanced options specific to sftp (SSH/SFTP).
#### --sftp-known-hosts-file
@ -592,16 +592,16 @@ Properties:
#### --sftp-path-override
Override path used by SSH connection.
Override path used by SSH shell commands.
This allows checksum calculation when SFTP and SSH paths are
different. This issue affects among others Synology NAS boxes.
Shared folders can be found in directories representing volumes
E.g. if shared folders can be found in directories representing volumes:
rclone sync /home/local/directory remote:/directory --sftp-path-override /volume2/directory
Home directory can be found in a shared folder called "home"
E.g. if home directory can be found in a shared folder called "home":
rclone sync /home/local/directory remote:/home/directory --sftp-path-override /volume1/homes/USER/directory
@ -623,6 +623,28 @@ Properties:
- Type: bool
- Default: true
#### --sftp-shell-type
The type of SSH shell on remote server, if any.
Leave blank for autodetect.
Properties:
- Config: shell_type
- Env Var: RCLONE_SFTP_SHELL_TYPE
- Type: string
- Required: false
- Examples:
- "none"
- No shell access
- "unix"
- Unix shell
- "powershell"
- PowerShell
- "cmd"
- Windows Command Prompt
#### --sftp-md5sum-command
The command used to read md5 hashes.
@ -763,6 +785,75 @@ Properties:
- Type: Duration
- Default: 1m0s
#### --sftp-chunk-size
Upload and download chunk size.
This controls the maximum packet size used in the SFTP protocol. The
RFC limits this to 32768 bytes (32k), however a lot of servers
support larger sizes and setting it larger will increase transfer
speed dramatically on high latency links.
Only use a setting higher than 32k if you always connect to the same
server or after sufficiently broad testing.
For example using the value of 252k with OpenSSH works well with its
maximum packet size of 256k.
If you get the error "failed to send packet header: EOF" when copying
a large file, try lowering this number.
Properties:
- Config: chunk_size
- Env Var: RCLONE_SFTP_CHUNK_SIZE
- Type: SizeSuffix
- Default: 32Ki
#### --sftp-concurrency
The maximum number of outstanding requests for one file
This controls the maximum number of outstanding requests for one file.
Increasing it will increase throughput on high latency links at the
cost of using more memory.
Properties:
- Config: concurrency
- Env Var: RCLONE_SFTP_CONCURRENCY
- Type: int
- Default: 64
#### --sftp-set-env
Environment variables to pass to sftp and commands
Set environment variables in the form:
VAR=value
to be passed to the sftp client and to any commands run (e.g. md5sum).
Pass multiple variables space separated, e.g.
VAR1=value VAR2=value
and pass variables with spaces in quotes, e.g.
"VAR3=value with space" "VAR4=value with space" VAR5=nospacehere
Properties:
- Config: set_env
- Env Var: RCLONE_SFTP_SET_ENV
- Type: SpaceSepList
- Default:
{{< rem autogenerated options stop >}}
## Limitations
View file
@ -150,7 +150,7 @@ as they can't be used in JSON strings.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/sharefile/sharefile.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to sharefile (Citrix Sharefile).
Here are the Standard options specific to sharefile (Citrix Sharefile).
#### --sharefile-root-folder-id
@ -179,7 +179,7 @@ Properties:
### Advanced options
Here are the advanced options specific to sharefile (Citrix Sharefile).
Here are the Advanced options specific to sharefile (Citrix Sharefile).
#### --sharefile-upload-cutoff
View file
@ -132,7 +132,7 @@ rclone copy /home/source mySia:backup
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/sia/sia.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to sia (Sia Decentralized Cloud).
Here are the Standard options specific to sia (Sia Decentralized Cloud).
#### --sia-api-url
@ -165,7 +165,7 @@ Properties:
### Advanced options
Here are the advanced options specific to sia (Sia Decentralized Cloud).
Here are the Advanced options specific to sia (Sia Decentralized Cloud).
#### --sia-user-agent
View file
@ -215,7 +215,7 @@ y/e/d> y
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/storj/storj.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to storj (Storj Decentralized Cloud Storage).
Here are the Standard options specific to storj (Storj Decentralized Cloud Storage).
#### --storj-provider
View file
@ -123,7 +123,7 @@ deleted straight away.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/sugarsync/sugarsync.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to sugarsync (Sugarsync).
Here are the Standard options specific to sugarsync (Sugarsync).
#### --sugarsync-app-id
@ -178,7 +178,7 @@ Properties:
### Advanced options
Here are the advanced options specific to sugarsync (Sugarsync).
Here are the Advanced options specific to sugarsync (Sugarsync).
#### --sugarsync-refresh-token
View file
@ -245,7 +245,7 @@ as they can't be used in JSON strings.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/swift/swift.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).
Here are the Standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).
#### --swift-env-auth
@ -485,7 +485,7 @@ Properties:
### Advanced options
Here are the advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).
Here are the Advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).
#### --swift-leave-parts-on-error
View file
@ -174,7 +174,7 @@ The policies definition are inspired by [trapexit/mergerfs](https://github.com/t
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/union/union.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to union (Union merges the contents of several upstream fs).
Here are the Standard options specific to union (Union merges the contents of several upstream fs).
#### --union-upstreams
@ -235,4 +235,28 @@ Properties:
- Type: int
- Default: 120
### Advanced options
Here are the Advanced options specific to union (Union merges the contents of several upstream fs).
#### --union-min-free-space
Minimum viable free space for lfs/eplfs policies.
If a remote has less than this much free space then it won't be
considered for use in lfs or eplfs policies.
Properties:
- Config: min_free_space
- Env Var: RCLONE_UNION_MIN_FREE_SPACE
- Type: SizeSuffix
- Default: 1Gi
### Metadata
Any metadata supported by the underlying remote is read and written.
See the [metadata](/docs/#metadata) docs for more info.
{{< rem autogenerated options stop >}}
View file
@ -101,7 +101,7 @@ as they can't be used in XML strings.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/uptobox/uptobox.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to uptobox (Uptobox).
Here are the Standard options specific to uptobox (Uptobox).
#### --uptobox-access-token
@ -118,7 +118,7 @@ Properties:
### Advanced options
Here are the advanced options specific to uptobox (Uptobox).
Here are the Advanced options specific to uptobox (Uptobox).
#### --uptobox-encoding
View file
@ -110,7 +110,7 @@ with them.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/webdav/webdav.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to webdav (Webdav).
Here are the Standard options specific to webdav (WebDAV).
#### --webdav-url
@ -127,7 +127,7 @@ Properties:
#### --webdav-vendor
Name of the Webdav site/service/software you are using.
Name of the WebDAV site/service/software you are using.
Properties:
@ -186,7 +186,7 @@ Properties:
### Advanced options
Here are the advanced options specific to webdav (Webdav).
Here are the Advanced options specific to webdav (WebDAV).
#### --webdav-bearer-token-command
View file
@ -116,7 +116,7 @@ as they can't be used in JSON strings.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/yandex/yandex.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to yandex (Yandex Disk).
Here are the Standard options specific to yandex (Yandex Disk).
#### --yandex-client-id
@ -146,7 +146,7 @@ Properties:
### Advanced options
Here are the advanced options specific to yandex (Yandex Disk).
Here are the Advanced options specific to yandex (Yandex Disk).
#### --yandex-token
View file
@ -127,7 +127,7 @@ from filenames during upload.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/zoho/zoho.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to zoho (Zoho).
Here are the Standard options specific to zoho (Zoho).
#### --zoho-client-id
@ -176,12 +176,16 @@ Properties:
- Europe
- "in"
- India
- "jp"
- Japan
- "com.cn"
- China
- "com.au"
- Australia
### Advanced options
Here are the advanced options specific to zoho (Zoho).
Here are the Advanced options specific to zoho (Zoho).
#### --zoho-token
Some files were not shown because too many files have changed in this diff.