diff --git a/MANUAL.html b/MANUAL.html
index 9648c259f..bb9b50815 100644
--- a/MANUAL.html
+++ b/MANUAL.html
@@ -12,7 +12,7 @@
The bucket-based remotes (eg Swift, S3, Google Cloud Storage, B2, Hubic) won't work from the root - you will need to specify a bucket, or a path within the bucket. So swift: won't work, whereas swift:bucket will, as will swift:bucket/path. None of these support the concept of directories, so empty directories will have a tendency to disappear once they fall out of the directory cache.
Only supported on Linux, FreeBSD, OS X and Windows at the moment.
File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can't use retries in the same way without making local copies of the uploads. Look at the EXPERIMENTAL file caching for solutions to make mount mount more reliable.
+File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can't use retries in the same way without making local copies of the uploads. Look at the file caching for solutions to make mount more reliable.
You can use the flag --attr-timeout to set the time the kernel caches the attributes (size, modification time etc) for directory entries.
The default is "1s" which caches files just long enough to avoid too many callbacks to rclone from the kernel.
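For example, a minimal mount invocation using this flag might look like the following (the remote name and mount point are placeholders):

```
rclone mount remote:path/to/files /path/to/local/mount --attr-timeout 1s
```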
@@ -935,7 +936,6 @@ umount /path/to/local/mount
Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won't be shared between multiple open file descriptors of the same file.
This flag is an upper limit for the used memory per file descriptor. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to --buffer-size * open files.
These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage system work more like a normal file system.
You'll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.
Note that the VFS cache works in addition to the cache backend and you may find that you need one or the other or both.
@@ -1096,31 +1096,16 @@ ffmpeg - | rclone rcat remote:path/to/file
Serve the remote over HTTP.
+Serve remote:path over FTP.
rclone serve http implements a basic web server to serve the remote over HTTP. This can be viewed in a web browser or you can make a remote of type http read from it.
-You can use the filter flags (eg --include, --exclude) to control what is served.
-The server will log errors. Use -v to see access logs.
---bwlimit will be respected for file transfers. Use --stats to control the stats printing.
+rclone serve ftp implements a basic FTP server to serve the remote over the FTP protocol. This can be viewed with an FTP client or you can make a remote of type ftp to read and write it.
Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
---server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.
---max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.
By default this will serve files without needing a login.
-You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.
-Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.
-The password file can be updated while rclone is running.
-Use --realm to set the authentication realm.
-By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.
---cert should be a either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
+You can set a single username and password with the --user and --pass flags.
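A minimal invocation combining these flags might look like this (the address, username and password are placeholders):

```
rclone serve ftp remote:path --addr :2121 --user myuser --pass mypassword
```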
Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won't be shared between multiple open file descriptors of the same file.
This flag is an upper limit for the used memory per file descriptor. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to --buffer-size * open files.
These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage system work more like a normal file system.
You'll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.
Note that the VFS cache works in addition to the cache backend and you may find that you need one or the other or both.
@@ -1176,74 +1160,34 @@ htpasswd -B htpasswd anotherUser
In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age.
This mode should support all normal file system operations.
If an upload or download fails it will be retried up to --low-level-retries times.
-The password file can be updated while rclone is running.
Use --realm to set the authentication realm.
-By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.
---cert should be a either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
-Serve remote:path over webdav.
-rclone serve webdav implements a basic webdav server to serve the remote over HTTP via the webdav protocol. This can be viewed with a webdav client or you can make a remote of type webdav to read and write it.
-This controls the ETag header. Without this flag the ETag will be based on the ModTime and Size of the object.
-If this flag is set to "auto" then rclone will choose the first supported hash on the backend or you can use a named hash such as "MD5" or "SHA-1".
-Use "rclone hashsum" to see the full list.
-Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
-If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
---server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.
---max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.
-By default this will serve files without needing a login.
-You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.
-Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.
-The password file can be updated while rclone is running.
-Use --realm to set the authentication realm.
-By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.
--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won't be shared between multiple open file descriptors of the same file.
This flag is an upper limit for the used memory per file descriptor. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to --buffer-size * open files.
These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage system work more like a normal file system.
You'll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.
Note that the VFS cache works in addition to the cache backend and you may find that you need one or the other or both.
@@ -1360,8 +1260,191 @@ htpasswd -B htpasswd anotherUser
In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age.
This mode should support all normal file system operations.
If an upload or download fails it will be retried up to --low-level-retries times.
-Serve the remote for restic's REST API.
+rclone serve restic implements restic's REST backend API over HTTP. This allows restic to use rclone as a data storage mechanism for cloud providers that restic does not support directly.
+The server will log errors. Use -v to see access logs.
+--bwlimit will be respected for file transfers. Use --stats to control the stats printing.
+Once you have set up the remote, check it is working with, for example "rclone lsd remote:". You may have called the remote something other than "remote:" - just substitute whatever you called it in the following instructions.
+Where you can replace "backup" in the above by whatever path in the remote you wish to use.
+By default this will serve on "localhost:8080"; you can change this with the "--addr" flag.
+You might wish to start this server on boot.
+Note that you will need restic 0.8.2 or later to interoperate with rclone.
+For the example above you will want to use "http://localhost:8080/" as the URL for the REST server.
+Note that you can use the endpoint to host multiple repositories. Do this by adding a directory name or path after the URL. Note that these must end with /. Eg
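As a sketch, assuming a remote called "remote:" and hypothetical repository names user1repo and user2repo, serving multiple repositories might look like:

```
rclone serve restic -v remote:backup
# point each restic repository at its own sub-path - note the trailing /
restic -r rest:http://localhost:8080/user1repo/ init
restic -r rest:http://localhost:8080/user2repo/ init
```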
+Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
+If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
+--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.
+--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.
+By default this will serve files without needing a login.
+You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.
+Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.
+The password file can be updated while rclone is running.
+Use --realm to set the authentication realm.
+By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.
+--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
+Serve remote:path over webdav.
+rclone serve webdav implements a basic webdav server to serve the remote over HTTP via the webdav protocol. This can be viewed with a webdav client or you can make a remote of type webdav to read and write it.
+This controls the ETag header. Without this flag the ETag will be based on the ModTime and Size of the object.
+If this flag is set to "auto" then rclone will choose the first supported hash on the backend or you can use a named hash such as "MD5" or "SHA-1".
+Use "rclone hashsum" to see the full list.
+Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
+If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
+--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.
+--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.
+By default this will serve files without needing a login.
+You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.
+Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.
+The password file can be updated while rclone is running.
+Use --realm to set the authentication realm.
+By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.
+--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
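For example, a minimal https invocation might look like the following (the certificate and key file names are placeholders):

```
rclone serve webdav remote:path --cert server.pem --key server.key
```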
+Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won't be shared between multiple open file descriptors of the same file.
+This flag is an upper limit for the used memory per file descriptor. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to --buffer-size * open files.
+These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage system work more like a normal file system.
+You'll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.
+Note that the VFS cache works in addition to the cache backend and you may find that you need one or the other or both.
+Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache.
+In this mode the cache will read directly from the remote and write directly to the remote without caching anything on disk.
+This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, while using minimal disk space.
+In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.
+This mode should support all normal file system operations.
+If an upload fails it will be retried up to --low-level-retries times.
+In this mode all reads and writes are buffered to and from disk. When a file is opened for read it will be downloaded in its entirety first.
+This may be appropriate for your needs, or you may prefer to look at the cache backend which does a much more sophisticated job of caching, including caching directory hierarchies and chunks of files.
+In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age.
+This mode should support all normal file system operations.
+If an upload or download fails it will be retried up to --low-level-retries times.
+Changes storage class/tier of objects in remote.
+rclone settier changes the storage tier or class of objects at the remote, if supported. A few cloud storage services provide different storage classes for objects, for example AWS S3 and Glacier; Azure Blob Storage's Hot, Cool and Archive; and Google Cloud Storage's Regional Storage, Nearline, Coldline etc.
+Note that certain tier changes make objects unavailable for immediate access. For example, tiering to archive in Azure Blob Storage puts objects into a frozen state; the user can restore them by setting the tier to Hot/Cool. Similarly, moving S3 objects to Glacier makes them inaccessible.
+Create new file or change file modification time.
-Create new file or change file modification time.
List the contents of the remote in a tree like fashion.
-rclone tree lists the contents of a remote in a similar way to the unix tree command.
You can use any of the filtering options with the tree command (eg --include and --exclude). You can also use --fast-list.
The tree command has many options for controlling the listing which are compatible with the tree command. Note that not all of them have short options as they conflict with rclone's short options.
Rclone has a number of options to control its behaviour.
Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
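As an illustration of how such duration strings decompose, here is a minimal Python sketch (not rclone code) that parses a Go-style duration into seconds:

```python
import re

# Unit multipliers in seconds, matching the valid time units listed above.
UNITS = {"ns": 1e-9, "us": 1e-6, "µs": 1e-6, "ms": 1e-3, "s": 1.0, "m": 60.0, "h": 3600.0}

def parse_duration(s):
    """Parse a Go-style duration like "2h45m" or "-1.5h" into seconds."""
    sign = 1.0
    if s and s[0] in "+-":
        sign = -1.0 if s[0] == "-" else 1.0
        s = s[1:]
    total = 0.0
    for value, unit in re.findall(r"(\d+(?:\.\d+)?)(ns|us|µs|ms|s|m|h)", s):
        total += float(value) * UNITS[unit]
    return sign * total

print(parse_duration("2h45m"))   # 9900.0 seconds
print(parse_duration("-1.5h"))   # -5400.0
```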
Options which use SIZE use kByte by default. However, a suffix of b for bytes, k for kBytes, M for MBytes, G for GBytes, T for TBytes and P for PBytes may be used. These are the binary units, eg 1, 2**10, 2**20, 2**30, 2**40 and 2**50 respectively.
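The suffix arithmetic can be sketched in Python (an illustration, not rclone's implementation), assuming the binary multipliers above:

```python
# Binary multipliers for rclone's SIZE suffixes, as described above.
SUFFIXES = {"b": 1, "k": 2**10, "M": 2**20, "G": 2**30, "T": 2**40, "P": 2**50}

def parse_size(s):
    """Parse a SIZE option like "10M" or "1G"; a bare number means kBytes."""
    if s[-1] in SUFFIXES:
        return int(float(s[:-1]) * SUFFIXES[s[-1]])
    return int(float(s) * SUFFIXES["k"])  # default unit is kByte

print(parse_size("10M"))  # 10485760
print(parse_size("1G"))   # 1073741824
print(parse_size("500"))  # 512000 (500 kByte)
```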
@@ -1586,6 +1683,8 @@ rclone sync /path/to/files remote:current-backup
Log all of rclone's output to FILE. This is not active by default. This can be useful for tracking down problems with syncs in combination with the -v flag. See the Logging section for more info.
Comma separated list of log format options: date, time, microseconds, longfile, shortfile, UTC. The default is "date,time".
This sets the log level for rclone. The default log level is NOTICE.
When using this flag, rclone won't update modification times of remote files if they are incorrect as it would normally.
This can be used if the remote is being synced with another tool also (eg the Google Drive client).
-This flag makes rclone update the stats in a static block in the terminal providing a realtime overview of the transfer.
Any log messages will scroll above the static block. Log messages will push the static block down to the bottom of the terminal where it will stay.
Normally rclone outputs stats and a completion message. If you set this flag it will make as little output as possible.
By default, rclone doesn't keep track of renamed files, so if you rename a file locally then sync it to a remote, rclone will delete the old file on the remote and upload a new copy.
If you use this flag, and the remote supports server side copy or server side move, and the source and destination have a compatible hash, then this will track renames during sync operations and perform renaming server-side.
Files will be matched by size and hash - if both match then a rename will be considered.
-If the destination does not support server-side copy or move, rclone will fall back to the default behaviour and log an error level message to the console.
+If the destination does not support server-side copy or move, rclone will fall back to the default behaviour and log an error level message to the console. Note: Encrypted destinations are not supported by --track-renames.
Rclone can be configured entirely using environment variables. These can be used to set defaults for options or config file entries.
-Every option in rclone can have its default set by environment variable.
To find the name of the environment variable, first, take the long option name, strip the leading --, change - to _, make upper case and prepend RCLONE_.
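This naming rule can be sketched as a small helper (an illustration, not rclone code):

```python
def rclone_env_var(option):
    """Derive the environment variable name for a long option:
    strip the leading --, change - to _, upper case, prepend RCLONE_."""
    return "RCLONE_" + option.lstrip("-").replace("-", "_").upper()

print(rclone_env_var("--drive-use-trash"))  # RCLONE_DRIVE_USE_TRASH
print(rclone_env_var("--stats"))            # RCLONE_STATS
```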
Ensure the specified file chunks are cached on disk.
+The chunks= parameter specifies the file chunks to check. It takes a comma separated list of array slice indices. The slice indices are similar to Python slices: start[:end]
+start is the 0 based chunk number from the beginning of the file to fetch, inclusive. end is the 0 based chunk number from the beginning of the file to fetch, exclusive. Both values can be negative, in which case they count from the back of the file. The value "-5:" represents the last 5 chunks of a file.
+Some valid examples are: ":5,-5:" -> the first and last five chunks "0,-2" -> the first and the second last chunk "0:10" -> the first ten chunks
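Since the indices follow Python slice semantics, the examples above can be checked with Python's own slicing (an illustration, not rclone code; assumes a file of 10 chunks):

```python
def chunks_for(spec, nchunks):
    """Expand one slice spec like ":5", "-5:" or "-2" into chunk numbers."""
    if ":" in spec:
        start, _, end = spec.partition(":")
        s = slice(int(start) if start else None, int(end) if end else None)
        return list(range(nchunks))[s]
    return [range(nchunks)[int(spec)]]  # a single index

print(chunks_for(":5", 10))   # [0, 1, 2, 3, 4]  - the first five chunks
print(chunks_for("-5:", 10))  # [5, 6, 7, 8, 9]  - the last five chunks
print(chunks_for("-2", 10))   # [8]              - the second last chunk
```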
+Any parameter with a key that starts with "file" can be used to specify files to fetch, eg
+File names will automatically be encrypted when a crypt remote is used on top of the cache.
Show statistics for the cache remote.
Otherwise pass files or dirs in as file=path or dir=path. Any parameter key starting with file will forget that file and any starting with dir will forget that dir, eg
Without any parameter given this returns the current status of the poll-interval setting.
+When the interval=duration parameter is set, the poll-interval value is updated and the polling function is notified. Setting interval=0 disables poll-interval.
+The timeout=duration parameter can be used to specify a time to wait for the current poll function to apply the new value. If timeout is less than or equal to 0, which is the default, rclone waits indefinitely.
+The new poll-interval value will only be active when the timeout is not reached.
+If poll-interval is updated or disabled temporarily, some changes might not get picked up by the polling function, depending on the used remote.
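As a sketch, updating or disabling poll-interval over the rc might look like:

```
rclone rc vfs/poll-interval interval=5m timeout=10s
# disable polling
rclone rc vfs/poll-interval interval=0
```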
This reads the directories for the specified paths and freshens the directory cache.
If no paths are passed in then it will refresh the root directory.
@@ -2279,6 +2396,7 @@ rclone rc core/bwlimit rate=off
The input objects can be supplied using URL parameters, POST parameters or by supplying "Content-Type: application/json" and a JSON blob in the body. There are examples of these below using curl.
The response will be a JSON blob in the body of the response. This is formatted to be reasonably human readable.
If an error occurs then there will be an HTTP error status (usually 400) and the body of the response will contain a JSON encoded error object.
+The server implements basic CORS support and allows all origins. The response to a preflight OPTIONS request will echo the requested "Access-Control-Request-Headers" back.
@@ -2985,6 +3103,18 @@ e/n/d/r/c/s/q> q
rclone ls remote:
Copy another local directory to the alias directory called source
rclone copy /home/source remote:source
+
+Standard Options
+Here are the standard options specific to alias (Alias for an existing remote).
+--alias-remote
+Remote or path to alias. Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path".
+
+- Config: remote
+- Env Var: RCLONE_ALIAS_REMOTE
+- Type: string
+- Default: ""
+
+
Amazon Drive
Amazon Drive, formerly known as Amazon Cloud Drive, is a cloud storage service run by Amazon for consumers.
Status
@@ -3086,17 +3216,75 @@ y/e/d> y
Any files you delete with rclone will end up in the trash. Amazon don't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Amazon's apps or via the Amazon Drive website. As of November 17, 2016, files are automatically deleted by Amazon from the trash after 30 days.
Let's say you usually use amazon.co.uk. When you authenticate with rclone it will take you to an amazon.com page to log in. Your amazon.co.uk email and password should work here just fine.
-Specific options
-Here are the command line options specific to this cloud storage system.
---acd-templink-threshold=SIZE
-Files this size or more will be downloaded via their tempLink
. This is to work around a problem with Amazon Drive which blocks downloads of files bigger than about 10GB. The default for this is 9GB which shouldn't need to be changed.
-To download files above this threshold, rclone requests a tempLink
which downloads the file through a temporary URL directly from the underlying S3 storage.
---acd-upload-wait-per-gb=TIME
+
+Standard Options
+Here are the standard options specific to amazon cloud drive (Amazon Drive).
+--acd-client-id
+Amazon Application Client ID.
+
+- Config: client_id
+- Env Var: RCLONE_ACD_CLIENT_ID
+- Type: string
+- Default: ""
+
+--acd-client-secret
+Amazon Application Client Secret.
+
+- Config: client_secret
+- Env Var: RCLONE_ACD_CLIENT_SECRET
+- Type: string
+- Default: ""
+
+Advanced Options
+Here are the advanced options specific to amazon cloud drive (Amazon Drive).
+--acd-auth-url
+Auth server URL. Leave blank to use Amazon's.
+
+- Config: auth_url
+- Env Var: RCLONE_ACD_AUTH_URL
+- Type: string
+- Default: ""
+
+--acd-token-url
+Token server URL. Leave blank to use Amazon's.
+
+- Config: token_url
+- Env Var: RCLONE_ACD_TOKEN_URL
+- Type: string
+- Default: ""
+
+--acd-checkpoint
+Checkpoint for internal polling (debug).
+
+- Config: checkpoint
+- Env Var: RCLONE_ACD_CHECKPOINT
+- Type: string
+- Default: ""
+
+--acd-upload-wait-per-gb
+Additional time per GB to wait after a failed complete upload to see if it appears.
Sometimes Amazon Drive gives an error when a file has been fully uploaded but the file appears anyway after a little while. This happens sometimes for files over 1GB in size and nearly every time for files bigger than 10GB. This parameter controls the time rclone waits for the file to appear.
The default value for this parameter is 3 minutes per GB, so by default it will wait 3 minutes for every GB uploaded to see if the file appears.
You can disable this feature by setting it to 0. This may cause conflict errors as rclone retries the failed upload but the file will most likely appear correctly eventually.
These values were determined empirically by observing lots of uploads of big files for a range of file sizes.
-Upload with the -v
flag to see more info about what rclone is doing in this situation.
+Upload with the "-v" flag to see more info about what rclone is doing in this situation.
+
+- Config: upload_wait_per_gb
+- Env Var: RCLONE_ACD_UPLOAD_WAIT_PER_GB
+- Type: Duration
+- Default: 3m0s
+
+--acd-templink-threshold
+Files >= this size will be downloaded via their tempLink.
+Files this size or more will be downloaded via their "tempLink". This is to work around a problem with Amazon Drive which blocks downloads of files bigger than about 10GB. The default for this is 9GB which shouldn't need to be changed.
+To download files above this threshold, rclone requests a "tempLink" which downloads the file through a temporary URL directly from the underlying S3 storage.
+
+- Config: templink_threshold
+- Env Var: RCLONE_ACD_TEMPLINK_THRESHOLD
+- Type: SizeSuffix
+- Default: 9G
+
+
Limitations
Note that Amazon Drive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
Amazon Drive has rate limiting so you may notice errors in the sync (429 errors). rclone will automatically retry the sync up to 3 times by default (see the --retries flag) which should hopefully work around this problem.
@@ -3327,7 +3515,7 @@ y/e/d>
rclone supports multipart uploads with S3 which means that it can upload files bigger than 5GB. Note that files uploaded both with multipart upload and through crypt remotes do not have MD5 sums.
Buckets and Regions
With Amazon S3 you can list buckets (rclone lsd) using any region, but you can only access the content of a bucket from the region it was created in. If you attempt to access a bucket from the wrong region, you will get an error: incorrect region, the bucket is not in 'XXX' region.
-Authentication
+Authentication
There are a number of ways to supply rclone with a set of AWS credentials, with and without using the environment.
The different authentication methods are tried in this order:
@@ -3399,30 +3587,775 @@ y/e/d>
You can transition objects to glacier storage using a lifecycle policy. The bucket can still be synced or copied into normally, but if rclone tries to access the data you will see an error like below.
2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file
In this case you need to restore the object(s) in question before using rclone.
-Specific options
-Here are the command line options specific to this cloud storage system.
---s3-acl=STRING
-Canned ACL used when creating buckets and/or storing objects in S3.
-For more info visit the canned ACL docs.
---s3-storage-class=STRING
-Storage class to upload new objects with.
-Available options include:
+
+Standard Options
+Here are the standard options specific to s3 (Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)).
+--s3-provider
+Choose your S3 provider.
-- STANDARD - default storage class
-- STANDARD_IA - for less frequently accessed data (e.g backups)
-- ONEZONE_IA - for storing data in only one Availability Zone
-- REDUCED_REDUNDANCY (only for noncritical, reproducible data, has lower redundancy)
+- Config: provider
+- Env Var: RCLONE_S3_PROVIDER
+- Type: string
+- Default: ""
+- Examples:
+
+- "AWS"
+
+- Amazon Web Services (AWS) S3
+
+- "Ceph"
+
+- "DigitalOcean"
+
+- "Dreamhost"
+
+- Dreamhost DreamObjects
+
+- "IBMCOS"
+
+- "Minio"
+
+- "Wasabi"
+
+- "Other"
+
+- Any other S3 compatible provider
+
+
---s3-chunk-size=SIZE
+--s3-env-auth
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key are blank.
+
+- Config: env_auth
+- Env Var: RCLONE_S3_ENV_AUTH
+- Type: bool
+- Default: false
+- Examples:
+
+- "false"
+
+- Enter AWS credentials in the next step
+
+- "true"
+
+- Get AWS credentials from the environment (env vars or IAM)
+
+
+
+--s3-access-key-id
+AWS Access Key ID. Leave blank for anonymous access or runtime credentials.
+
+- Config: access_key_id
+- Env Var: RCLONE_S3_ACCESS_KEY_ID
+- Type: string
+- Default: ""
+
+--s3-secret-access-key
+AWS Secret Access Key (password). Leave blank for anonymous access or runtime credentials.
+
+- Config: secret_access_key
+- Env Var: RCLONE_S3_SECRET_ACCESS_KEY
+- Type: string
+- Default: ""
+
+--s3-region
+Region to connect to.
+
+- Config: region
+- Env Var: RCLONE_S3_REGION
+- Type: string
+- Default: ""
+- Examples:
+
+- "us-east-1"
+
+- The default endpoint - a good choice if you are unsure.
+- US Region, Northern Virginia or Pacific Northwest.
+- Leave location constraint empty.
+
+- "us-east-2"
+
+- US East (Ohio) Region
+- Needs location constraint us-east-2.
+
+- "us-west-2"
+
+- US West (Oregon) Region
+- Needs location constraint us-west-2.
+
+- "us-west-1"
+
+- US West (Northern California) Region
+- Needs location constraint us-west-1.
+
+- "ca-central-1"
+
+- Canada (Central) Region
+- Needs location constraint ca-central-1.
+
+- "eu-west-1"
+
+- EU (Ireland) Region
+- Needs location constraint EU or eu-west-1.
+
+- "eu-west-2"
+
+- EU (London) Region
+- Needs location constraint eu-west-2.
+
+- "eu-central-1"
+
+- EU (Frankfurt) Region
+- Needs location constraint eu-central-1.
+
+- "ap-southeast-1"
+
+- Asia Pacific (Singapore) Region
+- Needs location constraint ap-southeast-1.
+
+- "ap-southeast-2"
+
+- Asia Pacific (Sydney) Region
+- Needs location constraint ap-southeast-2.
+
+- "ap-northeast-1"
+
+- Asia Pacific (Tokyo) Region
+- Needs location constraint ap-northeast-1.
+
+- "ap-northeast-2"
+
+- Asia Pacific (Seoul)
+- Needs location constraint ap-northeast-2.
+
+- "ap-south-1"
+
+- Asia Pacific (Mumbai)
+- Needs location constraint ap-south-1.
+
+- "sa-east-1"
+
+- South America (Sao Paulo) Region
+- Needs location constraint sa-east-1.
+
+
+
+--s3-region
+Region to connect to. Leave blank if you are using an S3 clone and you don't have a region.
+
+- Config: region
+- Env Var: RCLONE_S3_REGION
+- Type: string
+- Default: ""
+- Examples:
+
+- ""
+
+- Use this if unsure. Will use v4 signatures and an empty region.
+
+- "other-v2-signature"
+
+- Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.
+
+
+
+--s3-endpoint
+Endpoint for S3 API. Leave blank if using AWS to use the default endpoint for the region.
+
+- Config: endpoint
+- Env Var: RCLONE_S3_ENDPOINT
+- Type: string
+- Default: ""
+
+--s3-endpoint
+Endpoint for IBM COS S3 API. Specify if using an IBM COS On Premise.
+
+- Config: endpoint
+- Env Var: RCLONE_S3_ENDPOINT
+- Type: string
+- Default: ""
+- Examples:
+
+- "s3-api.us-geo.objectstorage.softlayer.net"
+
+- US Cross Region Endpoint
+
+- "s3-api.dal.us-geo.objectstorage.softlayer.net"
+
+- US Cross Region Dallas Endpoint
+
+- "s3-api.wdc-us-geo.objectstorage.softlayer.net"
+
+- US Cross Region Washington DC Endpoint
+
+- "s3-api.sjc-us-geo.objectstorage.softlayer.net"
+
+- US Cross Region San Jose Endpoint
+
+- "s3-api.us-geo.objectstorage.service.networklayer.com"
+
+- US Cross Region Private Endpoint
+
+- "s3-api.dal-us-geo.objectstorage.service.networklayer.com"
+
+- US Cross Region Dallas Private Endpoint
+
+- "s3-api.wdc-us-geo.objectstorage.service.networklayer.com"
+
+- US Cross Region Washington DC Private Endpoint
+
+- "s3-api.sjc-us-geo.objectstorage.service.networklayer.com"
+
+- US Cross Region San Jose Private Endpoint
+
+- "s3.us-east.objectstorage.softlayer.net"
+
+- US Region East Endpoint
+
+- "s3.us-east.objectstorage.service.networklayer.com"
+
+- US Region East Private Endpoint
+
+- "s3.us-south.objectstorage.softlayer.net"
+
+- US Region South Endpoint
+
+- "s3.us-south.objectstorage.service.networklayer.com"
+
+- US Region South Private Endpoint
+
+- "s3.eu-geo.objectstorage.softlayer.net"
+
+- EU Cross Region Endpoint
+
+- "s3.fra-eu-geo.objectstorage.softlayer.net"
+
+- EU Cross Region Frankfurt Endpoint
+
+- "s3.mil-eu-geo.objectstorage.softlayer.net"
+
+- EU Cross Region Milan Endpoint
+
+- "s3.ams-eu-geo.objectstorage.softlayer.net"
+
+- EU Cross Region Amsterdam Endpoint
+
+- "s3.eu-geo.objectstorage.service.networklayer.com"
+
+- EU Cross Region Private Endpoint
+
+- "s3.fra-eu-geo.objectstorage.service.networklayer.com"
+
+- EU Cross Region Frankfurt Private Endpoint
+
+- "s3.mil-eu-geo.objectstorage.service.networklayer.com"
+
+- EU Cross Region Milan Private Endpoint
+
+- "s3.ams-eu-geo.objectstorage.service.networklayer.com"
+
+- EU Cross Region Amsterdam Private Endpoint
+
+- "s3.eu-gb.objectstorage.softlayer.net"
+
+- Great Britain Endpoint
+
+- "s3.eu-gb.objectstorage.service.networklayer.com"
+
+- Great Britain Private Endpoint
+
+- "s3.ap-geo.objectstorage.softlayer.net"
+
+- APAC Cross Regional Endpoint
+
+- "s3.tok-ap-geo.objectstorage.softlayer.net"
+
+- APAC Cross Regional Tokyo Endpoint
+
+- "s3.hkg-ap-geo.objectstorage.softlayer.net"
+
+- APAC Cross Regional HongKong Endpoint
+
+- "s3.seo-ap-geo.objectstorage.softlayer.net"
+
+- APAC Cross Regional Seoul Endpoint
+
+- "s3.ap-geo.objectstorage.service.networklayer.com"
+
+- APAC Cross Regional Private Endpoint
+
+- "s3.tok-ap-geo.objectstorage.service.networklayer.com"
+
+- APAC Cross Regional Tokyo Private Endpoint
+
+- "s3.hkg-ap-geo.objectstorage.service.networklayer.com"
+
+- APAC Cross Regional HongKong Private Endpoint
+
+- "s3.seo-ap-geo.objectstorage.service.networklayer.com"
+
+- APAC Cross Regional Seoul Private Endpoint
+
+- "s3.mel01.objectstorage.softlayer.net"
+
+- Melbourne Single Site Endpoint
+
+- "s3.mel01.objectstorage.service.networklayer.com"
+
+- Melbourne Single Site Private Endpoint
+
+- "s3.tor01.objectstorage.softlayer.net"
+
+- Toronto Single Site Endpoint
+
+- "s3.tor01.objectstorage.service.networklayer.com"
+
+- Toronto Single Site Private Endpoint
+
+
+
+--s3-endpoint
+Endpoint for S3 API. Required when using an S3 clone.
+
+- Config: endpoint
+- Env Var: RCLONE_S3_ENDPOINT
+- Type: string
+- Default: ""
+- Examples:
+
+- "objects-us-west-1.dream.io"
+
+- Dream Objects endpoint
+
+- "nyc3.digitaloceanspaces.com"
+
+- Digital Ocean Spaces New York 3
+
+- "ams3.digitaloceanspaces.com"
+
+- Digital Ocean Spaces Amsterdam 3
+
+- "sgp1.digitaloceanspaces.com"
+
+- Digital Ocean Spaces Singapore 1
+
+- "s3.wasabisys.com"
+
+- Wasabi US East endpoint
+
+
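As a concrete pairing of provider and endpoint, a DigitalOcean Spaces remote might be configured like this (a sketch; the remote name and keys are placeholders):

```
[spaces]
type = s3
provider = DigitalOcean
endpoint = nyc3.digitaloceanspaces.com
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
```
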
+--s3-location-constraint
+Location constraint - must be set to match the Region. Used when creating buckets only.
+
+- Config: location_constraint
+- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
+- Type: string
+- Default: ""
+- Examples:
+
+- ""
+
+- Empty for US Region, Northern Virginia or Pacific Northwest.
+
+- "us-east-2"
+
+- US East (Ohio) Region.
+
+- "us-west-2"
+
+- US West (Oregon) Region.
+
+- "us-west-1"
+
+- US West (Northern California) Region.
+
+- "ca-central-1"
+
+- Canada (Central) Region.
+
+- "eu-west-1"
+
+- "eu-west-2"
+
+- "EU"
+
+- "ap-southeast-1"
+
+- Asia Pacific (Singapore) Region.
+
+- "ap-southeast-2"
+
+- Asia Pacific (Sydney) Region.
+
+- "ap-northeast-1"
+
+- Asia Pacific (Tokyo) Region.
+
+- "ap-northeast-2"
+
+- "ap-south-1"
+
+- "sa-east-1"
+
+- South America (Sao Paulo) Region.
+
+
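On AWS the region and the location constraint travel together; for example, a remote that creates buckets in Ohio would set both (a sketch):

```
[s3ohio]
type = s3
provider = AWS
region = us-east-2
location_constraint = us-east-2
```
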
+
+--s3-location-constraint
+Location constraint - must match endpoint when using IBM Cloud Public. For on-prem COS, do not make a selection from this list; hit enter.
+
+- Config: location_constraint
+- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
+- Type: string
+- Default: ""
+- Examples:
+
+- "us-standard"
+
+- US Cross Region Standard
+
+- "us-vault"
+
+- "us-cold"
+
+- "us-flex"
+
+- "us-east-standard"
+
+- US East Region Standard
+
+- "us-east-vault"
+
+- "us-east-cold"
+
+- "us-east-flex"
+
+- "us-south-standard"
+
+- US South Region Standard
+
+- "us-south-vault"
+
+- "us-south-cold"
+
+- "us-south-flex"
+
+- "eu-standard"
+
+- EU Cross Region Standard
+
+- "eu-vault"
+
+- "eu-cold"
+
+- "eu-flex"
+
+- "eu-gb-standard"
+
+- "eu-gb-vault"
+
+- "eu-gb-cold"
+
+- "eu-gb-flex"
+
+- "ap-standard"
+
+- "ap-vault"
+
+- "ap-cold"
+
+- "ap-flex"
+
+- "mel01-standard"
+
+- "mel01-vault"
+
+- "mel01-cold"
+
+- "mel01-flex"
+
+- "tor01-standard"
+
+- "tor01-vault"
+
+- "tor01-cold"
+
+- "tor01-flex"
+
+
+
+--s3-location-constraint
+Location constraint - must be set to match the Region. Leave blank if not sure. Used when creating buckets only.
+
+- Config: location_constraint
+- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
+- Type: string
+- Default: ""
+
+--s3-acl
+Canned ACL used when creating buckets and/or storing objects in S3. For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
+
+- Config: acl
+- Env Var: RCLONE_S3_ACL
+- Type: string
+- Default: ""
+- Examples:
+
+- "private"
+
+- Owner gets FULL_CONTROL. No one else has access rights (default).
+
+- "public-read"
+
+- Owner gets FULL_CONTROL. The AllUsers group gets READ access.
+
+- "public-read-write"
+
+- Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
+- Granting this on a bucket is generally not recommended.
+
+- "authenticated-read"
+
+- Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
+
+- "bucket-owner-read"
+
+- Object owner gets FULL_CONTROL. Bucket owner gets READ access.
+- If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
+
+- "bucket-owner-full-control"
+
+- Both the object owner and the bucket owner get FULL_CONTROL over the object.
+- If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
+
+- "private"
+
+- Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS
+
+- "public-read"
+
+- Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS
+
+- "public-read-write"
+
+- Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS
+
+- "authenticated-read"
+
+- Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS
+
+
+
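Like the other options, the ACL can be set for a single run through its environment variable; the value below is one of the canned ACLs listed above (the remote and paths in the comment are placeholders):

```shell
# Apply a canned ACL via the environment for this shell session.
export RCLONE_S3_ACL=public-read
# The equivalent command line flag would be:
#   rclone copy --s3-acl public-read /local/path s3demo:bucket
echo "$RCLONE_S3_ACL"
```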
+--s3-server-side-encryption
+The server-side encryption algorithm used when storing this object in S3.
+
+- Config: server_side_encryption
+- Env Var: RCLONE_S3_SERVER_SIDE_ENCRYPTION
+- Type: string
+- Default: ""
+- Examples:
+
+- ""
+
+- "AES256"
+
+- "aws:kms"
+
+
+
+--s3-sse-kms-key-id
+If using KMS ID you must provide the ARN of the Key.
+
+- Config: sse_kms_key_id
+- Env Var: RCLONE_S3_SSE_KMS_KEY_ID
+- Type: string
+- Default: ""
+- Examples:
+
+- ""
+
+- "arn:aws:kms:us-east-1:*"
+
+
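Combining the two encryption options above, a KMS-encrypted remote might look like this (a sketch; the key ARN is a placeholder in the format shown above):

```
[s3sse]
type = s3
provider = AWS
server_side_encryption = aws:kms
sse_kms_key_id = arn:aws:kms:us-east-1:ACCOUNT_ID:key/KEY_ID
```
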
+
+--s3-storage-class
+The storage class to use when storing new objects in S3.
+
+- Config: storage_class
+- Env Var: RCLONE_S3_STORAGE_CLASS
+- Type: string
+- Default: ""
+- Examples:
+
+- ""
+
+- "STANDARD"
+
+- Standard storage class
+
+- "REDUCED_REDUNDANCY"
+
+- Reduced redundancy storage class
+
+- "STANDARD_IA"
+
+- Standard Infrequent Access storage class
+
+- "ONEZONE_IA"
+
+- One Zone Infrequent Access storage class
+
+
+
+Advanced Options
+Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)).
+--s3-chunk-size
+Chunk size to use for uploading.
Any files larger than this will be uploaded in chunks of this size. The default is 5MB. The minimum is 5MB.
-Note that 2 chunks of this size are buffered in memory per transfer.
+Note that "--s3-upload-concurrency" chunks of this size are buffered in memory per transfer.
If you are transferring large files over high speed links and you have enough memory, then increasing this will speed up the transfers.
---s3-force-path-style=BOOL
-If this is true (the default) then rclone will use path style access, if false then rclone will use virtual path style. See the AWS S3 docs for more info.
-Some providers (eg Aliyun OSS or Netease COS) require this set to false
. It can also be set in the config in the advanced section.
+
+- Config: chunk_size
+- Env Var: RCLONE_S3_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 5M
+
+--s3-disable-checksum
+Don't store MD5 checksum with object metadata
+
+- Config: disable_checksum
+- Env Var: RCLONE_S3_DISABLE_CHECKSUM
+- Type: bool
+- Default: false
+
+--s3-session-token
+An AWS session token
+
+- Config: session_token
+- Env Var: RCLONE_S3_SESSION_TOKEN
+- Type: string
+- Default: ""
+
--s3-upload-concurrency
-Number of chunks of the same file that are uploaded concurrently. Default is 2.
-If you are uploading small amount of large file over high speed link and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers.
+Concurrency for multipart uploads.
+This is the number of chunks of the same file that are uploaded concurrently.
+If you are uploading small numbers of large files over high speed links and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers.
+
+- Config: upload_concurrency
+- Env Var: RCLONE_S3_UPLOAD_CONCURRENCY
+- Type: int
+- Default: 2
+
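A quick back-of-the-envelope for the buffering described above, using the defaults (an illustrative calculation, not an rclone command):

```shell
# Worst-case upload buffer per rclone run:
# --s3-chunk-size * --s3-upload-concurrency * --transfers
chunk_mb=5      # --s3-chunk-size default
concurrency=2   # --s3-upload-concurrency default
transfers=4     # --transfers default (rclone-wide)
echo "$((chunk_mb * concurrency * transfers))MB"   # prints 40MB
```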
+--s3-force-path-style
+If true use path style access, if false use virtual hosted style.
+If this is true (the default) then rclone will use path style access, if false then rclone will use virtual path style. See the AWS S3 docs for more info.
+Some providers (eg Aliyun OSS or Netease COS) require this set to false.
+
+- Config: force_path_style
+- Env Var: RCLONE_S3_FORCE_PATH_STYLE
+- Type: bool
+- Default: true
+
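The difference between the two styles shows up in the request URLs; the bucket name and endpoint here are hypothetical:

```shell
bucket="mybucket"
endpoint="s3.example.com"
echo "path style:           https://${endpoint}/${bucket}/key"
echo "virtual hosted style: https://${bucket}.${endpoint}/key"
```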
+--s3-v2-auth
+If true use v2 authentication.
+If this is false (the default) then rclone will use v4 authentication. If it is set then rclone will use v2 authentication.
+Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.
+
+- Config: v2_auth
+- Env Var: RCLONE_S3_V2_AUTH
+- Type: bool
+- Default: false
+
+
Anonymous access to public buckets
If you want to use rclone to access a public bucket, configure with a blank access_key_id
and secret_access_key
. Your config should end up looking like this:
[anons3]
@@ -3984,6 +4917,7 @@ y/e/d> y
When rclone uploads a new version of a file it creates a new version of it. Likewise when you delete a file, the old version will be marked hidden and still be available. Conversely, you may opt in to a "hard delete" of files with the --b2-hard-delete
flag which would permanently remove the file instead of hiding it.
Old versions of files, where available, are visible using the --b2-versions
flag.
If you wish to remove all the old versions then you can use the rclone cleanup remote:bucket
command which will delete all the old versions of files, leaving the current ones intact. You can also supply a path and only old versions under that path will be deleted, eg rclone cleanup remote:bucket/path/to/stuff
.
+Note that cleanup
does not remove partially uploaded files from the bucket.
When you purge
a bucket, the current and the old versions will be deleted then the bucket will be deleted.
However delete
will cause the current versions of the files to become hidden old versions.
Here is a session showing the listing and retrieval of an old version followed by a cleanup
of the old versions.
@@ -4025,24 +4959,8 @@ $ rclone -q --b2-versions ls b2:cleanup-test
/b2api/v1/b2_get_upload_part_url
/b2api/v1/b2_upload_part/
/b2api/v1/b2_finish_large_file
-Specific options
-Here are the command line options specific to this cloud storage system.
---b2-chunk-size valuee=SIZE
-When uploading large files chunk the file into this size. Note that these chunks are buffered in memory and there might a maximum of --transfers
chunks in progress at once. 5,000,000 Bytes is the minimim size (default 96M).
---b2-upload-cutoff=SIZE
-Cutoff for switching to chunked upload (default 190.735 MiB == 200 MB). Files above this size will be uploaded in chunks of --b2-chunk-size
.
-This value should be set no larger than 4.657GiB (== 5GB) as this is the largest file size that can be uploaded.
---b2-test-mode=FLAG
-This is for debugging purposes only.
-Setting FLAG to one of the strings below will cause b2 to return specific errors for debugging purposes.
-
-fail_some_uploads
-expire_some_account_authorization_tokens
-force_cap_exceeded
-
-These will be set in the X-Bz-Test-Mode
header which is documented in the b2 integrations checklist.
---b2-versions
-When set rclone will show and act on older versions of files. For example
+Versions
+Versions can be viewed with the --b2-versions
flag. When it is set rclone will show and act on older versions of files. For example
Listing without --b2-versions
$ rclone -q ls b2:cleanup-test
9 one.txt
@@ -4054,6 +4972,86 @@ $ rclone -q --b2-versions ls b2:cleanup-test
15 one-v2016-07-02-155621-000.txt
Showing that the current version is unchanged but older versions can be seen. These have the UTC date that they were uploaded to the server to the nearest millisecond appended to them.
Note that when using --b2-versions
no file write operations are permitted, so you can't upload files or delete them.
+
+Standard Options
+Here are the standard options specific to b2 (Backblaze B2).
+--b2-account
+Account ID or Application Key ID
+
+- Config: account
+- Env Var: RCLONE_B2_ACCOUNT
+- Type: string
+- Default: ""
+
+--b2-key
+Application Key
+
+- Config: key
+- Env Var: RCLONE_B2_KEY
+- Type: string
+- Default: ""
+
+--b2-hard-delete
+Permanently delete files on remote removal, otherwise hide files.
+
+- Config: hard_delete
+- Env Var: RCLONE_B2_HARD_DELETE
+- Type: bool
+- Default: false
+
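Putting the standard options together, a B2 remote might be configured like this (a sketch; the account and key are placeholders):

```
[b2demo]
type = b2
account = YOUR_ACCOUNT_ID
key = YOUR_APPLICATION_KEY
hard_delete = false
```
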
+Advanced Options
+Here are the advanced options specific to b2 (Backblaze B2).
+--b2-endpoint
+Endpoint for the service. Leave blank normally.
+
+- Config: endpoint
+- Env Var: RCLONE_B2_ENDPOINT
+- Type: string
+- Default: ""
+
+--b2-test-mode
+A flag string for X-Bz-Test-Mode header for debugging.
+This is for debugging purposes only. Setting it to one of the strings below will cause b2 to return specific errors:
+
+- "fail_some_uploads"
+- "expire_some_account_authorization_tokens"
+- "force_cap_exceeded"
+
+These will be set in the "X-Bz-Test-Mode" header which is documented in the b2 integrations checklist.
+
+- Config: test_mode
+- Env Var: RCLONE_B2_TEST_MODE
+- Type: string
+- Default: ""
+
+--b2-versions
+Include old versions in directory listings. Note that when using this no file write operations are permitted, so you can't upload files or delete them.
+
+- Config: versions
+- Env Var: RCLONE_B2_VERSIONS
+- Type: bool
+- Default: false
+
+--b2-upload-cutoff
+Cutoff for switching to chunked upload.
+Files above this size will be uploaded in chunks of "--b2-chunk-size".
+This value should be set no larger than 4.657GiB (== 5GB).
+
+- Config: upload_cutoff
+- Env Var: RCLONE_B2_UPLOAD_CUTOFF
+- Type: SizeSuffix
+- Default: 200M
+
+--b2-chunk-size
+Upload chunk size. Must fit in memory.
+When uploading large files, chunk the file into this size. Note that these chunks are buffered in memory and there might be a maximum of "--transfers" chunks in progress at once. 5,000,000 Bytes is the minimum size.
+
+- Config: chunk_size
+- Env Var: RCLONE_B2_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 96M
+
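As a rough illustration of the defaults above (not an rclone command): a 1GB upload is split into 96M chunks, and up to "--transfers" of them may be buffered at once:

```shell
chunk=96       # --b2-chunk-size default, in MB
transfers=4    # --transfers default
file_mb=1024   # hypothetical 1GB file
echo "chunks per file: $(( (file_mb + chunk - 1) / chunk ))"   # prints 11
echo "max buffered: $((chunk * transfers))MB"                  # prints 384MB
```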
+
Box
Paths are specified as remote:path
Paths may be as deep as required, eg remote:directory/subdirectory
.
@@ -4215,12 +5213,44 @@ y/e/d> y
For files above 50MB rclone will use a chunked transfer. Rclone will upload up to --transfers
chunks at the same time (shared among all the multipart uploads). Chunks are buffered in memory and are normally 8MB so increasing --transfers
will increase memory use.
Deleting files
Depending on the enterprise settings for your user, the item will either be actually deleted from Box or moved to the trash.
-Specific options
-Here are the command line options specific to this cloud storage system.
---box-upload-cutoff=SIZE
-Cutoff for switching to chunked upload - must be >= 50MB. The default is 50MB.
---box-commit-retries int
-Max number of times to try committing a multipart file. (default 100)
+
+Standard Options
+Here are the standard options specific to box (Box).
+--box-client-id
+Box App Client Id. Leave blank normally.
+
+- Config: client_id
+- Env Var: RCLONE_BOX_CLIENT_ID
+- Type: string
+- Default: ""
+
+--box-client-secret
+Box App Client Secret. Leave blank normally.
+
+- Config: client_secret
+- Env Var: RCLONE_BOX_CLIENT_SECRET
+- Type: string
+- Default: ""
+
+Advanced Options
+Here are the advanced options specific to box (Box).
+--box-upload-cutoff
+Cutoff for switching to multipart upload (>= 50MB).
+
+- Config: upload_cutoff
+- Env Var: RCLONE_BOX_UPLOAD_CUTOFF
+- Type: SizeSuffix
+- Default: 50M
+
+--box-commit-retries
+Max number of times to try committing a multipart file.
+
+- Config: commit_retries
+- Env Var: RCLONE_BOX_COMMIT_RETRIES
+- Type: int
+- Default: 100
+
+
Limitations
Note that Box is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
Box file names can't have the \
character in. rclone maps this to and from an identical looking unicode equivalent \
.
@@ -4323,7 +5353,7 @@ chunk_total_size = 10G
Once the move is complete the file is unlocked for modifications as it becomes as any other regular file
If the file is being read through cache
when it's actually deleted from the temporary path then cache
will simply swap the source to the cloud provider without interrupting the reading (small blip can happen though)
-Files are uploaded in sequence and only one file is uploaded at a time. Uploads will be stored in a queue and be processed based on the order they were added. The queue and the temporary storage is persistent across restarts and even purges of the cache.
+Files are uploaded in sequence and only one file is uploaded at a time. Uploads will be stored in a queue and be processed based on the order they were added. The queue and the temporary storage is persistent across restarts but can be cleared on startup with the --cache-db-purge
flag.
Write Support
Writes are supported through cache
. One caveat is that a mounted cache remote does not add any retry or fallback mechanism to the upload operation. This will depend on the implementation of the wrapped remote. Consider using Offline uploading
for reliable writes.
One special case is covered with cache-writes
which will cache the file data at the same time as the upload when it is enabled making it available from the cache store immediately once the upload is finished.
@@ -4338,6 +5368,14 @@ chunk_total_size = 10G
Note: If Plex options are not configured, cache
will function with its configured options without adapting any of its settings.
How to enable? Run rclone config
and add all the Plex options (endpoint, username and password) in your remote and it will be automatically enabled.
Affected settings: - cache-workers
: Configured value during confirmed playback or 1 all the other times
+Certificate Validation
+When the Plex server is configured to only accept secure connections, it is possible to use .plex.direct
URLs to ensure certificate validation succeeds. These URLs are used by Plex internally to connect to the Plex server securely.
+The format for these URLs is the following:
+https://ip-with-dots-replaced.server-hash.plex.direct:32400/
+The ip-with-dots-replaced
part can be any IPv4 address, where the dots have been replaced with dashes, e.g. 127.0.0.1
becomes 127-0-0-1
.
+To get the server-hash
part, the easiest way is to visit
+https://plex.tv/api/resources?includeHttps=1&X-Plex-Token=your-plex-token
+This page will list all the available Plex servers for your account with at least one .plex.direct
link for each. Copy one URL and replace the IP address with the desired address. This can be used as the plex_url
value.
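The steps above can be sketched in shell; SERVER_HASH stands in for the hash obtained from the plex.tv resources page:

```shell
# Turn the server IP into the ip-with-dots-replaced form and build the URL.
ip="127.0.0.1"
ip_dashed=$(echo "$ip" | tr '.' '-')
plex_url="https://${ip_dashed}.SERVER_HASH.plex.direct:32400/"
echo "$plex_url"   # prints https://127-0-0-1.SERVER_HASH.plex.direct:32400/
```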
Known issues
Mount and --dir-cache-time
--dir-cache-time controls the first layer of directory caching which works at the mount layer. Being an independent caching mechanism from the cache
backend, it will manage its own entries based on the configured time.
@@ -4364,71 +5402,255 @@ chunk_total_size = 10G
One common scenario is to keep your data encrypted in the cloud provider using the crypt
remote. crypt
uses a similar technique to wrap around an existing remote and handles this translation in a seamless way.
There is an issue with wrapping the remotes in this order: cloud remote -> crypt -> cache
During testing, I experienced a lot of bans with the remotes in this order. I suspect it might be related to how crypt opens files on the cloud provider which makes it think we're downloading the full file instead of small chunks. Organizing the remotes in this order yields better results: cloud remote -> cache -> crypt
+absolute remote paths
+cache
can not differentiate between relative and absolute paths for the wrapped remote. Any path given in the remote
config setting and on the command line will be passed to the wrapped remote as is, but for storing the chunks on disk the path will be made relative by removing any leading /
character.
+This behavior is irrelevant for most backend types, but there are backends where a leading /
changes the effective directory, e.g. in the sftp
backend paths starting with a /
are relative to the root of the SSH server and paths without are relative to the user home directory. As a result sftp:bin
and sftp:/bin
will share the same cache folder, even if they represent a different directory on the SSH server.
Cache and Remote Control (--rc)
Cache supports the new --rc
mode in rclone and can be remote controlled through the following end points: By default, the listener is disabled if you do not add the flag.
rc cache/expire
Purge a remote from the cache backend. Supports either a directory or a file. It supports both encrypted and unencrypted file names if cache is wrapped by crypt.
Params: - remote = path to remote (required) - withData = true/false to delete cached data (chunks) as well (optional, false by default)
-Specific options
-Here are the command line options specific to this cloud storage system.
---cache-db-path=PATH
-Path to where the file structure metadata (DB) is stored locally. The remote name is used as the DB file name.
-Default: /cache-backend/ Example: /.cache/cache-backend/test-cache
---cache-chunk-path=PATH
-Path to where partial file data (chunks) is stored locally. The remote name is appended to the final path.
-This config follows the --cache-db-path
. If you specify a custom location for --cache-db-path
and don't specify one for --cache-chunk-path
then --cache-chunk-path
will use the same path as --cache-db-path
.
-Default: /cache-backend/ Example: /.cache/cache-backend/test-cache
+
+Standard Options
+Here are the standard options specific to cache (Cache a remote).
+--cache-remote
+Remote to cache. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended).
+
+- Config: remote
+- Env Var: RCLONE_CACHE_REMOTE
+- Type: string
+- Default: ""
+
+--cache-plex-url
+The URL of the Plex server
+
+- Config: plex_url
+- Env Var: RCLONE_CACHE_PLEX_URL
+- Type: string
+- Default: ""
+
+--cache-plex-username
+The username of the Plex user
+
+- Config: plex_username
+- Env Var: RCLONE_CACHE_PLEX_USERNAME
+- Type: string
+- Default: ""
+
+--cache-plex-password
+The password of the Plex user
+
+- Config: plex_password
+- Env Var: RCLONE_CACHE_PLEX_PASSWORD
+- Type: string
+- Default: ""
+
+--cache-chunk-size
+The size of a chunk (partial file data).
+Use lower numbers for slower connections. If the chunk size is changed, any downloaded chunks will be invalid and cache-chunk-path will need to be cleared or unexpected EOF errors will occur.
+
+- Config: chunk_size
+- Env Var: RCLONE_CACHE_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 5M
+- Examples:
+
+- "1m"
+
+- "5M"
+
+- "10M"
+
+
+
+--cache-info-age
+How long to cache file structure information (directory listings, file size, times etc). If all write operations are done through the cache then you can safely make this value very large as the cache store will also be updated in real time.
+
+- Config: info_age
+- Env Var: RCLONE_CACHE_INFO_AGE
+- Type: Duration
+- Default: 6h0m0s
+- Examples:
+
+- "1h"
+
+- "24h"
+
+- "48h"
+
+
+
+--cache-chunk-total-size
+The total size that the chunks can take up on the local disk.
+If the cache exceeds this value then it will start to delete the oldest chunks until it goes under this value.
+
+- Config: chunk_total_size
+- Env Var: RCLONE_CACHE_CHUNK_TOTAL_SIZE
+- Type: SizeSuffix
+- Default: 10G
+- Examples:
+
+- "500M"
+
+- "1G"
+
+- "10G"
+
+
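Taken together, a typical cache remote wrapping another remote might look like this (a sketch; the names and sizes are illustrative):

```
[cached]
type = cache
remote = myremote:bucket
chunk_size = 5M
info_age = 24h
chunk_total_size = 10G
```
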
+
+Advanced Options
+Here are the advanced options specific to cache (Cache a remote).
+--cache-plex-token
+The plex token for authentication - auto set normally
+
+- Config: plex_token
+- Env Var: RCLONE_CACHE_PLEX_TOKEN
+- Type: string
+- Default: ""
+
+--cache-plex-insecure
+Skip all certificate verifications when connecting to the Plex server
+
+- Config: plex_insecure
+- Env Var: RCLONE_CACHE_PLEX_INSECURE
+- Type: string
+- Default: ""
+
+--cache-db-path
+Directory to store file structure metadata DB. The remote name is used as the DB file name.
+
+- Config: db_path
+- Env Var: RCLONE_CACHE_DB_PATH
+- Type: string
+- Default: "/home/ncw/.cache/rclone/cache-backend"
+
+--cache-chunk-path
+Directory to cache chunk files.
+Path to where partial file data (chunks) are stored locally. The remote name is appended to the final path.
+This config follows the "--cache-db-path". If you specify a custom location for "--cache-db-path" and don't specify one for "--cache-chunk-path" then "--cache-chunk-path" will use the same path as "--cache-db-path".
+
+- Config: chunk_path
+- Env Var: RCLONE_CACHE_CHUNK_PATH
+- Type: string
+- Default: "/home/ncw/.cache/rclone/cache-backend"
+
--cache-db-purge
-Flag to clear all the cached data for this remote before.
-Default: not set
---cache-chunk-size=SIZE
-The size of a chunk (partial file data). Use lower numbers for slower connections. If the chunk size is changed, any downloaded chunks will be invalid and cache-chunk-path will need to be cleared or unexpected EOF errors will occur.
-Default: 5M
---cache-total-chunk-size=SIZE
-The total size that the chunks can take up on the local disk. If cache
exceeds this value then it will start to the delete the oldest chunks until it goes under this value.
-Default: 10G
---cache-chunk-clean-interval=DURATION
-How often should cache
perform cleanups of the chunk storage. The default value should be ok for most people. If you find that cache
goes over cache-total-chunk-size
too often then try to lower this value to force it to perform cleanups more often.
-Default: 1m
---cache-info-age=DURATION
-How long to keep file structure information (directory listings, file size, mod times etc) locally.
-If all write operations are done through cache
then you can safely make this value very large as the cache store will also be updated in real time.
-Default: 6h
---cache-read-retries=RETRIES
+Clear all the cached data for this remote on start.
+
+- Config: db_purge
+- Env Var: RCLONE_CACHE_DB_PURGE
+- Type: bool
+- Default: false
+
+--cache-chunk-clean-interval
+How often should the cache perform cleanups of the chunk storage. The default value should be ok for most people. If you find that the cache goes over "cache-chunk-total-size" too often then try to lower this value to force it to perform cleanups more often.
+
+- Config: chunk_clean_interval
+- Env Var: RCLONE_CACHE_CHUNK_CLEAN_INTERVAL
+- Type: Duration
+- Default: 1m0s
+
+--cache-read-retries
How many times to retry a read from a cache storage.
-Since reading from a cache
stream is independent from downloading file data, readers can get to a point where there's no more data in the cache. Most of the times this can indicate a connectivity issue if cache
isn't able to provide file data anymore.
+Since reading from a cache stream is independent of downloading file data, readers can get to a point where there's no more data in the cache. Most of the time this can indicate a connectivity issue if cache isn't able to provide file data anymore.
For really slow connections, increase this to a point where the stream is able to provide data but your experience will be very stuttering.
-Default: 10
---cache-workers=WORKERS
+
+- Config: read_retries
+- Env Var: RCLONE_CACHE_READ_RETRIES
+- Type: int
+- Default: 10
+
+--cache-workers
How many workers should run in parallel to download chunks.
Higher values will mean more parallel processing (better CPU needed) and more concurrent requests on the cloud provider. This impacts several aspects like the cloud provider API limits, more stress on the hardware that rclone runs on, but it also means that streams will be more fluid and data will be available much faster to readers.
-Note: If the optional Plex integration is enabled then this setting will adapt to the type of reading performed and the value specified here will be used as a maximum number of workers to use. Default: 4
+Note: If the optional Plex integration is enabled then this setting will adapt to the type of reading performed and the value specified here will be used as a maximum number of workers to use.
+
+- Config: workers
+- Env Var: RCLONE_CACHE_WORKERS
+- Type: int
+- Default: 4
+
--cache-chunk-no-memory
-By default, cache
will keep file data during streaming in RAM as well to provide it to readers as fast as possible.
-This transient data is evicted as soon as it is read and the number of chunks stored doesn't exceed the number of workers. However, depending on other settings like cache-chunk-size
and cache-workers
this footprint can increase if there are parallel streams too (multiple files being read at the same time).
+Disable the in-memory cache for storing chunks during streaming.
+By default, cache will keep file data during streaming in RAM as well to provide it to readers as fast as possible.
+This transient data is evicted as soon as it is read and the number of chunks stored doesn't exceed the number of workers. However, depending on other settings like "cache-chunk-size" and "cache-workers" this footprint can increase if there are parallel streams too (multiple files being read at the same time).
If the hardware permits it, use this feature to provide an overall better performance during streaming but it can also be disabled if RAM is not available on the local machine.
-Default: not set
---cache-rps=NUMBER
-This setting places a hard limit on the number of requests per second that cache
will be doing to the cloud provider remote and try to respect that value by setting waits between reads.
+
+- Config: chunk_no_memory
+- Env Var: RCLONE_CACHE_CHUNK_NO_MEMORY
+- Type: bool
+- Default: false
+
+--cache-rps
+Limits the number of requests per second to the source FS (-1 to disable)
+This setting places a hard limit on the number of requests per second that cache will make to the cloud provider remote, and it tries to respect that value by inserting waits between reads.
If you find that you're getting banned or limited on the cloud provider through cache and know that a smaller number of requests per second will allow you to work with it then you can use this setting for that.
A good balance of all the other settings should make this setting useless but it is available to set for more special cases.
NOTE: This will limit the number of requests during streams but other API calls to the cloud provider like directory listings will still pass.
-Default: disabled
+
+- Config: rps
+- Env Var: RCLONE_CACHE_RPS
+- Type: int
+- Default: -1
+
--cache-writes
-If you need to read files immediately after you upload them through cache
you can enable this flag to have their data stored in the cache store at the same time during upload.
-Default: not set
---cache-tmp-upload-path=PATH
-This is the path where cache
will use as a temporary storage for new files that need to be uploaded to the cloud provider.
+Cache file data on writes through the FS
+If you need to read files immediately after you upload them through cache you can enable this flag to have their data stored in the cache store at the same time during upload.
+
+- Config: writes
+- Env Var: RCLONE_CACHE_WRITES
+- Type: bool
+- Default: false
+
+--cache-tmp-upload-path
+Directory to keep temporary files until they are uploaded.
+This is the path that cache will use as temporary storage for new files that need to be uploaded to the cloud provider.
Specifying a value will enable this feature. Without it, it is completely disabled and files will be uploaded directly to the cloud provider.
-Default: empty
---cache-tmp-wait-time=DURATION
+
+- Config: tmp_upload_path
+- Env Var: RCLONE_CACHE_TMP_UPLOAD_PATH
+- Type: string
+- Default: ""
+
+--cache-tmp-wait-time
+How long should files be stored in local cache before being uploaded.
This is the duration that a file must wait in the temporary location cache-tmp-upload-path before it is selected for upload.
Note that only one file is uploaded at a time and it can take longer to start the upload if a queue has formed for this purpose.
-Default: 15m
---cache-db-wait-time=DURATION
+
+- Config: tmp_wait_time
+- Env Var: RCLONE_CACHE_TMP_WAIT_TIME
+- Type: Duration
+- Default: 15s
+
+--cache-db-wait-time
+How long to wait for the DB to be available - 0 is unlimited
Only one process can have the DB open at any one time, so rclone waits for this duration for the DB to become available before it gives an error.
If you set it to 0 then it will wait forever.
-Default: 1s
+
+- Config: db_wait_time
+- Env Var: RCLONE_CACHE_DB_WAIT_TIME
+- Type: Duration
+- Default: 1s
+
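Each of the options above maps one-to-one to a key in rclone.conf. A hypothetical excerpt, wrapping an assumed already-configured remote (the remote name `cached-gdrive`, the wrapped path `gdrive:media` and the tmp directory are placeholders):

```ini
# Hypothetical rclone.conf excerpt: a cache remote using a few of the
# options documented above.
[cached-gdrive]
type = cache
remote = gdrive:media
chunk_clean_interval = 1m
workers = 4
tmp_upload_path = /tmp/rclone-upload
tmp_wait_time = 15s
```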
+
Crypt
The crypt
remote encrypts and decrypts another remote.
To use it first set up the underlying remote following the config instructions for that remote. You can also use a local pathname instead of a remote which will encrypt and decrypt from that directory which might be useful for encrypting onto a USB stick for example.
@@ -4613,11 +5835,88 @@ $ rclone -q ls secret:
Crypt stores modification times using the underlying remote so support depends on that.
Hashes are not stored for crypt. However the data integrity is protected by an extremely strong crypto authenticator.
Note that you should use the rclone cryptcheck
command to check the integrity of a crypted remote instead of rclone check
which can't check the checksums properly.
-Specific options
-Here are the command line options specific to this cloud storage system.
+
+Standard Options
+Here are the standard options specific to crypt (Encrypt/Decrypt a remote).
+--crypt-remote
+Remote to encrypt/decrypt. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended).
+
+- Config: remote
+- Env Var: RCLONE_CRYPT_REMOTE
+- Type: string
+- Default: ""
+
+--crypt-filename-encryption
+How to encrypt the filenames.
+
+- Config: filename_encryption
+- Env Var: RCLONE_CRYPT_FILENAME_ENCRYPTION
+- Type: string
+- Default: "standard"
+- Examples:
+
+- "off"
+
+- Don't encrypt the file names. Adds a ".bin" extension only.
+
+- "standard"
+
+- Encrypt the filenames; see the docs for the details.
+
+- "obfuscate"
+
+- Very simple filename obfuscation.
+
+
+
+--crypt-directory-name-encryption
+Option to either encrypt directory names or leave them intact.
+
+- Config: directory_name_encryption
+- Env Var: RCLONE_CRYPT_DIRECTORY_NAME_ENCRYPTION
+- Type: bool
+- Default: true
+- Examples:
+
+- "true"
+
+- Encrypt directory names.
+
+- "false"
+
+- Don't encrypt directory names, leave them intact.
+
+
+
+--crypt-password
+Password or pass phrase for encryption.
+
+- Config: password
+- Env Var: RCLONE_CRYPT_PASSWORD
+- Type: string
+- Default: ""
+
+--crypt-password2
+Password or pass phrase for salt. Optional but recommended. Should be different to the previous password.
+
+- Config: password2
+- Env Var: RCLONE_CRYPT_PASSWORD2
+- Type: string
+- Default: ""
+
+Advanced Options
+Here are the advanced options specific to crypt (Encrypt/Decrypt a remote).
--crypt-show-mapping
+For all files listed show how the names encrypt.
If this flag is set then for each file that the remote is asked to list, it will log (at level INFO) a line stating the decrypted file name and the encrypted file name.
This is so you can work out which encrypted names are which decrypted names just in case you need to do something with the encrypted file names, or for debugging purposes.
+
+- Config: show_mapping
+- Env Var: RCLONE_CRYPT_SHOW_MAPPING
+- Type: bool
+- Default: false
+
+
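Put together, a crypt remote section in rclone.conf looks roughly like the following hypothetical excerpt. Note that passwords in the config file are stored obscured, so this section would normally be created with `rclone config` rather than typed in by hand (the remote name and wrapped path are placeholders):

```ini
# Hypothetical rclone.conf excerpt: a crypt remote wrapping s3:mybucket/encrypted.
# Passwords are stored obscured; create this with "rclone config".
[secret]
type = crypt
remote = s3:mybucket/encrypted
filename_encryption = standard
directory_name_encryption = true
password = *** obscured ***
password2 = *** obscured ***
```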
Backing up a crypted remote
If you wish to backup a crypted remote, it is recommended that you use rclone sync
on the encrypted files, and make sure the passwords are the same in the new encrypted remote.
This will have the following advantages
@@ -4758,11 +6057,38 @@ y/e/d> y
Dropbox supports modified times, but the only way to set a modification time is to re-upload the file.
This means that if you uploaded your data with an older version of rclone which didn't support the v2 API and modified times, rclone will decide to upload all your old data to fix the modification times. If you don't want this to happen use --size-only
or --checksum
flag to stop it.
Dropbox supports its own hash type which is checked for all transfers.
-Specific options
-Here are the command line options specific to this cloud storage system.
---dropbox-chunk-size=SIZE
-Any files larger than this will be uploaded in chunks of this size. The default is 48MB. The maximum is 150MB.
+
+Standard Options
+Here are the standard options specific to dropbox (Dropbox).
+--dropbox-client-id
+Dropbox App Client Id. Leave blank normally.
+
+- Config: client_id
+- Env Var: RCLONE_DROPBOX_CLIENT_ID
+- Type: string
+- Default: ""
+
+--dropbox-client-secret
+Dropbox App Client Secret. Leave blank normally.
+
+- Config: client_secret
+- Env Var: RCLONE_DROPBOX_CLIENT_SECRET
+- Type: string
+- Default: ""
+
+Advanced Options
+Here are the advanced options specific to dropbox (Dropbox).
+--dropbox-chunk-size
+Upload chunk size. (< 150M).
+Any files larger than this will be uploaded in chunks of this size.
Note that chunks are buffered in memory (one at a time) so rclone can deal with retries. Setting this larger will increase the speed slightly (at most 10% for 128MB in tests) at the cost of using more memory. It can be set smaller if you are tight on memory.
+
+- Config: chunk_size
+- Env Var: RCLONE_DROPBOX_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 48M
+
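The chunk size can be raised per transfer via the flag or the environment variable. A sketch, assuming rclone is installed and a Dropbox remote named `dropbox:` is already configured (paths are placeholders):

```shell
# Use larger chunks for one transfer (each chunk is buffered in memory,
# so this costs more RAM per transfer).
rclone copy /local/big-files dropbox:backup --dropbox-chunk-size 128M

# Equivalent, via the environment variable:
RCLONE_DROPBOX_CHUNK_SIZE=128M rclone copy /local/big-files dropbox:backup
```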
+
Limitations
Note that Dropbox is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
There are some file names such as thumbs.db
which Dropbox can't store. There is a full list of them in the "Ignored Files" section of this document. Rclone will issue an error message File name disallowed - not uploading
if it attempts to upload one of those file names, but the sync won't fail.
@@ -4853,6 +6179,49 @@ y/e/d> y
FTP does not support modified times. Any times you see on the server will be time of upload.
Checksums
FTP does not support any checksums.
+
+Standard Options
+Here are the standard options specific to ftp (FTP Connection).
+--ftp-host
+FTP host to connect to
+
+- Config: host
+- Env Var: RCLONE_FTP_HOST
+- Type: string
+- Default: ""
+- Examples:
+
+- "ftp.example.com"
+
+- Connect to ftp.example.com
+
+
+
+--ftp-user
+FTP username, leave blank for the current username
+
+- Config: user
+- Env Var: RCLONE_FTP_USER
+- Type: string
+- Default: ""
+
+--ftp-port
+FTP port, leave blank to use default (21)
+
+- Config: port
+- Env Var: RCLONE_FTP_PORT
+- Type: string
+- Default: ""
+
+--ftp-pass
+FTP password
+
+- Config: pass
+- Env Var: RCLONE_FTP_PASS
+- Type: string
+- Default: ""
+
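These options also make it possible to use an FTP server without a saved config, via an on-the-fly `:ftp:` remote. A sketch (the host and credentials are placeholders; note that the password passed on the command line must be obscured with `rclone obscure`):

```shell
# List the top level of an FTP server without a config file.
rclone lsd :ftp: --ftp-host ftp.example.com --ftp-user alice \
    --ftp-pass "$(rclone obscure 'secret')"
```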
+
Limitations
Note that since FTP isn't HTTP based the following flags don't work with it: --dump-headers
, --dump-bodies
, --dump-auth
Note that --timeout
isn't supported (but --contimeout
is).
@@ -5023,6 +6392,218 @@ y/e/d> y
This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
Modified time
Google google cloud storage stores md5sums natively and rclone stores modification times as metadata on the object, under the "mtime" key in RFC3339 format accurate to 1ns.
+
+Standard Options
+Here are the standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
+--gcs-client-id
+Google Application Client Id. Leave blank normally.
+
+- Config: client_id
+- Env Var: RCLONE_GCS_CLIENT_ID
+- Type: string
+- Default: ""
+
+--gcs-client-secret
+Google Application Client Secret. Leave blank normally.
+
+- Config: client_secret
+- Env Var: RCLONE_GCS_CLIENT_SECRET
+- Type: string
+- Default: ""
+
+--gcs-project-number
+Project number. Optional - needed only for list/create/delete buckets - see your developer console.
+
+- Config: project_number
+- Env Var: RCLONE_GCS_PROJECT_NUMBER
+- Type: string
+- Default: ""
+
+--gcs-service-account-file
+Service Account Credentials JSON file path. Leave blank normally. Needed only if you want to use an SA instead of interactive login.
+
+- Config: service_account_file
+- Env Var: RCLONE_GCS_SERVICE_ACCOUNT_FILE
+- Type: string
+- Default: ""
+
+--gcs-service-account-credentials
+Service Account Credentials JSON blob. Leave blank normally. Needed only if you want to use an SA instead of interactive login.
+
+- Config: service_account_credentials
+- Env Var: RCLONE_GCS_SERVICE_ACCOUNT_CREDENTIALS
+- Type: string
+- Default: ""
+
+--gcs-object-acl
+Access Control List for new objects.
+
+- Config: object_acl
+- Env Var: RCLONE_GCS_OBJECT_ACL
+- Type: string
+- Default: ""
+- Examples:
+
+- "authenticatedRead"
+
+- Object owner gets OWNER access, and all Authenticated Users get READER access.
+
+- "bucketOwnerFullControl"
+
+- Object owner gets OWNER access, and project team owners get OWNER access.
+
+- "bucketOwnerRead"
+
+- Object owner gets OWNER access, and project team owners get READER access.
+
+- "private"
+
+- Object owner gets OWNER access [default if left blank].
+
+- "projectPrivate"
+
+- Object owner gets OWNER access, and project team members get access according to their roles.
+
+- "publicRead"
+
+- Object owner gets OWNER access, and all Users get READER access.
+
+
+
+--gcs-bucket-acl
+Access Control List for new buckets.
+
+- Config: bucket_acl
+- Env Var: RCLONE_GCS_BUCKET_ACL
+- Type: string
+- Default: ""
+- Examples:
+
+- "authenticatedRead"
+
+- Project team owners get OWNER access, and all Authenticated Users get READER access.
+
+- "private"
+
+- Project team owners get OWNER access [default if left blank].
+
+- "projectPrivate"
+
+- Project team members get access according to their roles.
+
+- "publicRead"
+
+- Project team owners get OWNER access, and all Users get READER access.
+
+- "publicReadWrite"
+
+- Project team owners get OWNER access, and all Users get WRITER access.
+
+
+
+--gcs-location
+Location for the newly created buckets.
+
+- Config: location
+- Env Var: RCLONE_GCS_LOCATION
+- Type: string
+- Default: ""
+- Examples:
+
+- ""
+
+- Empty for default location (US).
+
+- "asia"
+
+- Multi-regional location for Asia.
+
+- "eu"
+
+- Multi-regional location for Europe.
+
+- "us"
+
+- Multi-regional location for United States.
+
+- "asia-east1"
+
+- "asia-northeast1"
+
+- "asia-southeast1"
+
+- "australia-southeast1"
+
+- "europe-west1"
+
+- "europe-west2"
+
+- "us-central1"
+
+- "us-east1"
+
+- "us-east4"
+
+- "us-west1"
+
+
+
+--gcs-storage-class
+The storage class to use when storing objects in Google Cloud Storage.
+
+- Config: storage_class
+- Env Var: RCLONE_GCS_STORAGE_CLASS
+- Type: string
+- Default: ""
+- Examples:
+
+- ""
+
+- "MULTI_REGIONAL"
+
+- Multi-regional storage class
+
+- "REGIONAL"
+
+- Regional storage class
+
+- "NEARLINE"
+
+- Nearline storage class
+
+- "COLDLINE"
+
+- Coldline storage class
+
+- "DURABLE_REDUCED_AVAILABILITY"
+
+- Durable reduced availability storage class
+
+
+
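A hypothetical rclone.conf excerpt combining several of the options above for a service-account-based setup (the remote name and file path are placeholders):

```ini
# Hypothetical rclone.conf excerpt: a GCS remote using a service account,
# a regional location and an explicit storage class.
[gcs]
type = google cloud storage
service_account_file = /path/to/service-account.json
location = europe-west1
storage_class = REGIONAL
bucket_acl = private
```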
+
Google Drive
Paths are specified as drive:path
Drive paths may be as deep as required, eg drive:directory/subdirectory
.
@@ -5246,23 +6827,80 @@ trashed=false and 'c' in parents
If you wish to empty your trash you can use the rclone cleanup remote:
command which will permanently delete all your trashed files. This command does not take any path arguments.
To view your current quota you can use the rclone about remote:
command which will display your usage limit (quota), the usage in Google Drive, the size of all files in the Trash and the space used by other Google services such as Gmail. This command does not take any path arguments.
-Specific options
-Here are the command line options specific to this cloud storage system.
---drive-acknowledge-abuse
-If downloading a file returns the error This file has been identified as malware or spam and cannot be downloaded
with the error code cannotDownloadAbusiveFile
then supply this flag to rclone to indicate you acknowledge the risks of downloading the file and rclone will download it anyway.
---drive-auth-owner-only
-Only consider files owned by the authenticated user.
---drive-chunk-size=SIZE
-Upload chunk size. Must a power of 2 >= 256k. Default value is 8 MB.
-Making this larger will improve performance, but note that each chunk is buffered in memory one per transfer.
-Reducing this will reduce memory usage but decrease performance.
-
-Google documents can only be exported from Google drive. When rclone downloads a Google doc it chooses a format to download depending upon this setting.
-By default the formats are docx,xlsx,pptx,svg
which are a sensible default for an editable document.
+Import/Export of google documents
+Google documents can be exported from and uploaded to Google Drive.
+When rclone downloads a Google doc it chooses a format to download depending upon the --drive-export-formats
setting. By default the export formats are docx,xlsx,pptx,svg
which are a sensible default for an editable document.
When choosing a format, rclone runs down the list provided in order and chooses the first file format the doc can be exported as from the list. If the file can't be exported to a format on the formats list, then rclone will choose a format from the default list.
-If you prefer an archive copy then you might use --drive-formats pdf
, or if you prefer openoffice/libreoffice formats you might use --drive-formats ods,odt,odp
.
+If you prefer an archive copy then you might use --drive-export-formats pdf
, or if you prefer openoffice/libreoffice formats you might use --drive-export-formats ods,odt,odp
.
Note that rclone adds the extension to the google doc, so if it is called My Spreadsheet
on google docs, it will be exported as My Spreadsheet.xlsx
or My Spreadsheet.pdf
etc.
-Here are the possible extensions with their corresponding mime types.
+When importing files into Google Drive, rclone will convert all files with an extension in --drive-import-formats
to their associated document type. rclone will not convert any files by default, since the conversion is a lossy process.
+The conversion must result in a file with the same extension when the --drive-export-formats
rules are applied to the uploaded document.
+Here are some examples for allowed and prohibited conversions.
+
+| export-formats | import-formats | Upload Ext | Document Ext | Allowed |
+| -------------- | -------------- | ---------- | ------------ | ------- |
+| odt            | odt            | odt        | odt          | Yes     |
+| odt            | docx,odt       | odt        | odt          | Yes     |
+|                | docx           | docx       | docx         | Yes     |
+|                | odt            | odt        | docx         | No      |
+| odt,docx       | docx,odt       | docx       | odt          | No      |
+| docx,odt       | docx,odt       | docx       | docx         | Yes     |
+| docx,odt       | docx,odt       | odt        | docx         | No      |
+
+This limitation can be disabled by specifying --drive-allow-import-name-change
+. When using this flag, rclone can convert multiple file types resulting in the same document type at once, eg with --drive-import-formats docx,odt,txt
+, all files having these extensions would result in a document represented as a docx file. This brings the additional risk of overwriting a document, if multiple files have the same stem. Many rclone operations will not handle this name change in any way. They assume an equal name when copying files and might copy the file again or delete them when the name changes.
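The export and import rules above can be driven entirely from the command line. A sketch, assuming rclone and a configured remote named `drive:` (the paths are placeholders):

```shell
# Export Google docs as PDF when copying down, and convert docx/odt files
# into Google docs when uploading.
rclone copy drive:reports /local/reports --drive-export-formats pdf
rclone copy /local/drafts drive:drafts --drive-import-formats docx,odt
```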
+Here are the possible export extensions with their corresponding mime types. Most of these can also be used for importing, but there are more that are not listed here. Some of these additional ones might only be available when the operating system provides the correct MIME type entries.
+This list can be changed by Google Drive at any time and might not represent the currently available conversions.
@@ -5283,30 +6921,30 @@ trashed=false and 'c' in parents
Standard CSV format for Spreadsheets |
-doc |
-application/msword |
-Micosoft Office Document |
-
-
docx |
application/vnd.openxmlformats-officedocument.wordprocessingml.document |
Microsoft Office Document |
-
+
epub |
application/epub+zip |
E-book format |
-
+
html |
text/html |
An HTML Document |
-
+
jpg |
image/jpeg |
A JPEG Image File |
+
+json |
+application/vnd.google-apps.script+json |
+JSON Text Format |
+
odp |
application/vnd.oasis.opendocument.presentation |
@@ -5363,48 +7001,293 @@ trashed=false and 'c' in parents
Plain Text |
-xls |
-application/vnd.ms-excel |
-Microsoft Office Spreadsheet |
-
-
xlsx |
application/vnd.openxmlformats-officedocument.spreadsheetml.sheet |
Microsoft Office Spreadsheet |
-
+
zip |
application/zip |
A ZIP file of HTML, Images CSS |
---drive-alternate-export
-If this option is set this instructs rclone to use an alternate set of export URLs for drive documents. Users have reported that the official export URLs can't export large documents, whereas these unofficial ones can.
-See rclone issue #2243 for background, this google drive issue and this helpful post.
---drive-impersonate user
-When using a service account, this instructs rclone to impersonate the user passed in.
---drive-keep-revision-forever
-Keeps new head revision of the file forever.
---drive-list-chunk int
-Size of listing chunk 100-1000. 0 to disable. (default 1000)
---drive-shared-with-me
-Instructs rclone to operate on your "Shared with me" folder (where Google Drive lets you access the files and folders others have shared with you).
-This works both with the "list" (lsd, lsl, etc) and the "copy" commands (copy, sync, etc), and with all other commands too.
+Google documents can also be exported as link files. These files will open a browser window for the Google Docs website of that document when opened. The link file extension has to be specified as a --drive-export-formats
parameter. They will match all available Google Documents.
+
+
+| Extension | Description                             | OS Support     |
+| --------- | --------------------------------------- | -------------- |
+| desktop   | freedesktop.org specified desktop entry | Linux          |
+| link.html | An HTML Document with a redirect        | All            |
+| url       | INI style link file                     | macOS, Windows |
+| webloc    | macOS specific XML format               | macOS          |
+
+
+
+Standard Options
+Here are the standard options specific to drive (Google Drive).
+--drive-client-id
+Google Application Client Id. Leave blank normally.
+
+- Config: client_id
+- Env Var: RCLONE_DRIVE_CLIENT_ID
+- Type: string
+- Default: ""
+
+--drive-client-secret
+Google Application Client Secret. Leave blank normally.
+
+- Config: client_secret
+- Env Var: RCLONE_DRIVE_CLIENT_SECRET
+- Type: string
+- Default: ""
+
+--drive-scope
+Scope that rclone should use when requesting access from drive.
+
+- Config: scope
+- Env Var: RCLONE_DRIVE_SCOPE
+- Type: string
+- Default: ""
+- Examples:
+
+- "drive"
+
+- Full access all files, excluding Application Data Folder.
+
+- "drive.readonly"
+
+- Read-only access to file metadata and file contents.
+
+- "drive.file"
+
+- Access to files created by rclone only.
+- These are visible in the drive website.
+- File authorization is revoked when the user deauthorizes the app.
+
+- "drive.appfolder"
+
+- Allows read and write access to the Application Data folder.
+- This is not visible in the drive website.
+
+- "drive.metadata.readonly"
+
+- Allows read-only access to file metadata but
+- does not allow any access to read or download file content.
+
+
+
+--drive-root-folder-id
+ID of the root folder. Leave blank normally. Fill in to access "Computers" folders. (see docs).
+
+- Config: root_folder_id
+- Env Var: RCLONE_DRIVE_ROOT_FOLDER_ID
+- Type: string
+- Default: ""
+
+--drive-service-account-file
+Service Account Credentials JSON file path. Leave blank normally. Needed only if you want to use an SA instead of interactive login.
+
+- Config: service_account_file
+- Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_FILE
+- Type: string
+- Default: ""
+
+Advanced Options
+Here are the advanced options specific to drive (Google Drive).
+--drive-service-account-credentials
+Service Account Credentials JSON blob. Leave blank normally. Needed only if you want to use an SA instead of interactive login.
+
+- Config: service_account_credentials
+- Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_CREDENTIALS
+- Type: string
+- Default: ""
+
+--drive-team-drive
+ID of the Team Drive
+
+- Config: team_drive
+- Env Var: RCLONE_DRIVE_TEAM_DRIVE
+- Type: string
+- Default: ""
+
+--drive-auth-owner-only
+Only consider files owned by the authenticated user.
+
+- Config: auth_owner_only
+- Env Var: RCLONE_DRIVE_AUTH_OWNER_ONLY
+- Type: bool
+- Default: false
+
+--drive-use-trash
+Send files to the trash instead of deleting permanently. Defaults to true, namely sending files to the trash. Use --drive-use-trash=false
to delete files permanently instead.
+
+- Config: use_trash
+- Env Var: RCLONE_DRIVE_USE_TRASH
+- Type: bool
+- Default: true
+
--drive-skip-gdocs
Skip google documents in all listings. If given, gdocs practically become invisible to rclone.
+
+- Config: skip_gdocs
+- Env Var: RCLONE_DRIVE_SKIP_GDOCS
+- Type: bool
+- Default: false
+
+--drive-shared-with-me
+Only show files that are shared with me.
+Instructs rclone to operate on your "Shared with me" folder (where Google Drive lets you access the files and folders others have shared with you).
+This works both with the "list" (lsd, lsl, etc) and the "copy" commands (copy, sync, etc), and with all other commands too.
+
+- Config: shared_with_me
+- Env Var: RCLONE_DRIVE_SHARED_WITH_ME
+- Type: bool
+- Default: false
+
--drive-trashed-only
Only show files that are in the trash. This will show trashed files in their original directory structure.
---drive-upload-cutoff=SIZE
-File size cutoff for switching to chunked upload. Default is 8 MB.
---drive-use-trash
-Controls whether files are sent to the trash or deleted permanently. Defaults to true, namely sending files to the trash. Use --drive-use-trash=false
to delete files permanently instead.
+
+- Config: trashed_only
+- Env Var: RCLONE_DRIVE_TRASHED_ONLY
+- Type: bool
+- Default: false
+
+
+--drive-formats
+Deprecated: see export_formats
+
+- Config: formats
+- Env Var: RCLONE_DRIVE_FORMATS
+- Type: string
+- Default: ""
+
+
+--drive-export-formats
+Comma separated list of preferred formats for downloading Google docs.
+
+- Config: export_formats
+- Env Var: RCLONE_DRIVE_EXPORT_FORMATS
+- Type: string
+- Default: "docx,xlsx,pptx,svg"
+
+
+--drive-import-formats
+Comma separated list of preferred formats for uploading Google docs.
+
+- Config: import_formats
+- Env Var: RCLONE_DRIVE_IMPORT_FORMATS
+- Type: string
+- Default: ""
+
+--drive-allow-import-name-change
+Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+
+- Config: allow_import_name_change
+- Env Var: RCLONE_DRIVE_ALLOW_IMPORT_NAME_CHANGE
+- Type: bool
+- Default: false
+
--drive-use-created-date
-Use the file creation date in place of the modification date. Defaults to false.
+Use file created date instead of modified date.
Useful when downloading data and you want the creation date used in place of the last modified date.
WARNING: This flag may have some unexpected consequences.
-When uploading to your drive all files will be overwritten unless they haven't been modified since their creation. And the inverse will occur while downloading. This side effect can be avoided by using the --checksum
flag.
+When uploading to your drive all files will be overwritten unless they haven't been modified since their creation. And the inverse will occur while downloading. This side effect can be avoided by using the "--checksum" flag.
+This feature was implemented to retain the capture date of photos as recorded by google photos. You will first need to check the "Create a Google Photos folder" option in your google drive settings. You can then copy or move the photos locally and have the date the image was taken (created) used as the modification date.
+
+- Config: use_created_date
+- Env Var: RCLONE_DRIVE_USE_CREATED_DATE
+- Type: bool
+- Default: false
+
+--drive-list-chunk
+Size of listing chunk 100-1000. 0 to disable.
+
+- Config: list_chunk
+- Env Var: RCLONE_DRIVE_LIST_CHUNK
+- Type: int
+- Default: 1000
+
+--drive-impersonate
+Impersonate this user when using a service account.
+
+- Config: impersonate
+- Env Var: RCLONE_DRIVE_IMPERSONATE
+- Type: string
+- Default: ""
+
+--drive-alternate-export
+Use alternate export URLs for google documents export.
+If this option is set this instructs rclone to use an alternate set of export URLs for drive documents. Users have reported that the official export URLs can't export large documents, whereas these unofficial ones can.
+See rclone issue #2243 for background, this google drive issue and this helpful post.
+
+- Config: alternate_export
+- Env Var: RCLONE_DRIVE_ALTERNATE_EXPORT
+- Type: bool
+- Default: false
+
+--drive-upload-cutoff
+Cutoff for switching to chunked upload
+
+- Config: upload_cutoff
+- Env Var: RCLONE_DRIVE_UPLOAD_CUTOFF
+- Type: SizeSuffix
+- Default: 8M
+
+--drive-chunk-size
+Upload chunk size. Must be a power of 2 >= 256k.
+Making this larger will improve performance, but note that each chunk is buffered in memory (one per transfer).
+Reducing this will reduce memory usage but decrease performance.
+
+- Config: chunk_size
+- Env Var: RCLONE_DRIVE_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 8M
+
+--drive-acknowledge-abuse
+Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+If downloading a file returns the error "This file has been identified as malware or spam and cannot be downloaded" with the error code "cannotDownloadAbusiveFile" then supply this flag to rclone to indicate you acknowledge the risks of downloading the file and rclone will download it anyway.
+
+- Config: acknowledge_abuse
+- Env Var: RCLONE_DRIVE_ACKNOWLEDGE_ABUSE
+- Type: bool
+- Default: false
+
+--drive-keep-revision-forever
+Keep new head revision of each file forever.
+
+- Config: keep_revision_forever
+- Env Var: RCLONE_DRIVE_KEEP_REVISION_FOREVER
+- Type: bool
+- Default: false
+
+--drive-v2-download-min-size
+If objects are larger than this, use the drive v2 API to download.
+
+- Config: v2_download_min_size
+- Env Var: RCLONE_DRIVE_V2_DOWNLOAD_MIN_SIZE
+- Type: SizeSuffix
+- Default: off
+
+
Limitations
Drive has quite a lot of rate limiting. This causes rclone to be limited to transferring about 2 files per second only. Individual files may be transferred much faster at 100s of MBytes/s but lots of small files can take a long time.
Server side copies are also subject to a separate rate limit. If you see User rate limit exceeded errors, wait at least 24 hours and retry. You can disable server side copies with --disable copy
to download and upload the files if you prefer.
@@ -5522,6 +7405,25 @@ e/n/d/r/c/s/q> q
Usage without a config file
Since the http remote only has one config parameter it is easy to use without a config file:
rclone lsd --http-url https://beta.rclone.org :http:
+
+Standard Options
+Here are the standard options specific to http (http Connection).
+--http-url
+URL of http host to connect to
+
+- Config: url
+- Env Var: RCLONE_HTTP_URL
+- Type: string
+- Default: ""
+- Examples:
+
+- "https://example.com"
+
+- Connect to example.com
+
+
+
+
Hubic
Paths are specified as remote:path
Paths are specified as remote:container
(or remote:
for the lsd
command.) You may put subdirectories in too, eg remote:container/path/to/dir
.
@@ -5604,6 +7506,37 @@ y/e/d> y
The modified time is stored as metadata on the object as X-Object-Meta-Mtime
as floating point since the epoch accurate to 1 ns.
This is a defacto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.
Note that Hubic wraps the Swift backend, so most of the properties are the same.
+
+Standard Options
+Here are the standard options specific to hubic (Hubic).
+--hubic-client-id
+Hubic Client Id. Leave blank normally.
+
+- Config: client_id
+- Env Var: RCLONE_HUBIC_CLIENT_ID
+- Type: string
+- Default: ""
+
+--hubic-client-secret
+Hubic Client Secret. Leave blank normally.
+
+- Config: client_secret
+- Env Var: RCLONE_HUBIC_CLIENT_SECRET
+- Type: string
+- Default: ""
+
+Advanced Options
+Here are the advanced options specific to hubic (Hubic).
+--hubic-chunk-size
+Above this size files will be chunked into a _segments container.
+Above this size files will be chunked into a _segments container. The default for this is 5GB which is its maximum value.
+
+- Config: chunk_size
+- Env Var: RCLONE_HUBIC_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 5G
+
+
Limitations
This uses the normal OpenStack Swift mechanism to refresh the Swift API credentials and ignores the expires field returned by the Hubic API.
The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.
@@ -5667,22 +7600,88 @@ y/e/d> y
rclone ls remote:
To copy a local directory to an Jottacloud directory called backup
rclone copy /home/source remote:backup
+--fast-list
+This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
+Note that the implementation in Jottacloud always uses only a single API request to get the entire list, so for large folders this could lead to a long wait before the first results are shown.
Modified time and hashes
Jottacloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.
Jottacloud supports MD5 type hashes, so you can use the --checksum
flag.
Note that Jottacloud requires the MD5 hash before upload so if the source does not have an MD5 checksum then the file will be cached temporarily on disk (wherever the TMPDIR
environment variable points to) before it is uploaded. Small files will be cached in memory - see the --jottacloud-md5-memory-limit
flag.
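The buffering behaviour described above can be sketched with Python's SpooledTemporaryFile, which keeps small payloads in memory and spills larger ones to disk. This is an illustrative sketch only, not rclone's actual Go implementation; the helper name and chunk size are invented:

```python
import hashlib
import io
import tempfile

MD5_MEMORY_LIMIT = 10 * 1024 * 1024  # mirrors the 10M default

def spool_and_hash(stream):
    """Buffer the source so its MD5 is known before the upload starts.
    Data stays in memory up to MD5_MEMORY_LIMIT, then spills to a
    temporary file (which honours TMPDIR)."""
    spool = tempfile.SpooledTemporaryFile(max_size=MD5_MEMORY_LIMIT)
    md5 = hashlib.md5()
    while chunk := stream.read(64 * 1024):
        md5.update(chunk)
        spool.write(chunk)
    spool.seek(0)  # rewind so the upload can re-read the data
    return spool, md5.hexdigest()

spool, digest = spool_and_hash(io.BytesIO(b"example payload"))
```

Raising --jottacloud-md5-memory-limit trades memory for avoiding the temporary file, in the same way a larger max_size would here.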
Deleting files
-Any files you delete with rclone will end up in the trash. Due to a lack of API documentation emptying the trash is currently only possible via the Jottacloud website.
-Versions
+By default rclone will send all files to the trash when deleting files. Due to a lack of API documentation emptying the trash is currently only possible via the Jottacloud website. If deleting permanently is required then use the --jottacloud-hard-delete
flag, or set the equivalent environment variable.
+Versions
Jottacloud supports file versioning. When rclone uploads a new version of a file it creates a new version of it. Currently rclone only supports retrieving the current version but older versions can be accessed via the Jottacloud Website.
+
+To view your current quota you can use the rclone about remote:
command which will display your usage limit (unless it is unlimited) and the current usage.
+
+Standard Options
+Here are the standard options specific to jottacloud (JottaCloud).
+--jottacloud-user
+User Name
+
+- Config: user
+- Env Var: RCLONE_JOTTACLOUD_USER
+- Type: string
+- Default: ""
+
+--jottacloud-pass
+Password.
+
+- Config: pass
+- Env Var: RCLONE_JOTTACLOUD_PASS
+- Type: string
+- Default: ""
+
+--jottacloud-mountpoint
+The mountpoint to use.
+
+- Config: mountpoint
+- Env Var: RCLONE_JOTTACLOUD_MOUNTPOINT
+- Type: string
+- Default: ""
+- Examples:
+
+- "Sync"
+
+- Will be synced by the official client.
+
+- "Archive"
+
+
+
+Advanced Options
+Here are the advanced options specific to jottacloud (JottaCloud).
+--jottacloud-md5-memory-limit
+Files bigger than this will be cached on disk to calculate the MD5 if required.
+
+- Config: md5_memory_limit
+- Env Var: RCLONE_JOTTACLOUD_MD5_MEMORY_LIMIT
+- Type: SizeSuffix
+- Default: 10M
+
+--jottacloud-hard-delete
+Delete files permanently rather than putting them into the trash.
+
+- Config: hard_delete
+- Env Var: RCLONE_JOTTACLOUD_HARD_DELETE
+- Type: bool
+- Default: false
+
+--jottacloud-unlink
+Remove an existing public link to a file/folder with the link command, rather than creating one. Default is false, meaning the link command will create or retrieve a public link.
+
+- Config: unlink
+- Env Var: RCLONE_JOTTACLOUD_UNLINK
+- Type: bool
+- Default: false
+
+
Limitations
Note that Jottacloud is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
There are quite a few characters that can't be in Jottacloud file names. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ? in it will be mapped to ? instead.
Jottacloud only supports filenames up to 255 characters in length.
-Specific options
-Here are the command line options specific to this cloud storage system.
---jottacloud-md5-memory-limit SizeSuffix
-Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
Troubleshooting
Jottacloud exhibits some inconsistent behaviours regarding deleted files and folders which may cause Copy, Move and DirMove operations to previously deleted paths to fail. Emptying the trash should help in such cases.
Mega
@@ -5745,12 +7744,46 @@ y/e/d> y
Mega can have two files with exactly the same name and path (unlike a normal file system).
Duplicated files cause problems with the syncing and you will see messages in the log about duplicates.
Use rclone dedupe
to fix duplicated files.
-Specific options
-Here are the command line options specific to this cloud storage system.
+
+Standard Options
+Here are the standard options specific to mega (Mega).
+--mega-user
+User name
+
+- Config: user
+- Env Var: RCLONE_MEGA_USER
+- Type: string
+- Default: ""
+
+--mega-pass
+Password.
+
+- Config: pass
+- Env Var: RCLONE_MEGA_PASS
+- Type: string
+- Default: ""
+
+Advanced Options
+Here are the advanced options specific to mega (Mega).
--mega-debug
-If this flag is set (along with -vv
) it will print further debugging information from the mega backend.
+Output more debug from Mega.
+If this flag is set (along with -vv) it will print further debugging information from the mega backend.
+
+- Config: debug
+- Env Var: RCLONE_MEGA_DEBUG
+- Type: bool
+- Default: false
+
--mega-hard-delete
-Normally the mega backend will put all deletions into the trash rather than permanently deleting them. If you specify this flag (or set it in the advanced config) then rclone will permanently delete objects instead.
+Delete files permanently rather than putting them into the trash.
+Normally the mega backend will put all deletions into the trash rather than permanently deleting them. If you specify this then rclone will permanently delete objects instead.
+
+- Config: hard_delete
+- Env Var: RCLONE_MEGA_HARD_DELETE
+- Type: bool
+- Default: false
+
+
Limitations
This backend uses the go-mega library, an open source Go library implementing the Mega API. There doesn't appear to be any documentation for the Mega protocol beyond the Mega C++ SDK source code, so there are likely quite a few errors still remaining in this library.
Mega allows duplicate files which may confuse rclone.
@@ -5827,7 +7860,7 @@ y/e/d> y
rclone ls remote:container
Sync /home/local/directory
to the remote container, deleting any excess files in the container.
rclone sync /home/local/directory remote:container
---fast-list
+--fast-list
This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
Modified time
The modified time is stored as metadata on the object with the mtime
key. It is stored using RFC3339 Format time with nanosecond precision. The metadata is supplied during directory listings so there is no overhead to using it.
@@ -5853,14 +7886,80 @@ rclone ls azureblob:othercontainer
The files will be uploaded in parallel in 4MB chunks (by default). Note that these chunks are buffered in memory and there may be up to --transfers
of them being uploaded at once.
Files can't be split into more than 50,000 chunks, so by default the largest file that can be uploaded with 4MB chunk size is 195GB. Above this rclone will double the chunk size until it creates fewer than 50,000 chunks. By default this means a maximum file size of 3.2TB can be uploaded. This can be raised to 5TB using --azureblob-chunk-size 100M.
Note that rclone doesn't commit the block list until the end of the upload which means that there is a limit of 9.5TB of multipart uploads in progress as Azure won't allow more than that amount of uncommitted blocks.
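The arithmetic behind these limits is simple: the maximum file size is the chunk size times the 50,000-block cap. A back-of-the-envelope sketch (not rclone code):

```python
MAX_BLOCKS = 50_000  # Azure's limit on blocks per block blob

def max_file_size_gib(chunk_size_mib: int) -> float:
    """Largest uploadable file, in GiB, for a given chunk size in MiB."""
    return MAX_BLOCKS * chunk_size_mib / 1024

print(max_file_size_gib(4))    # default 4M chunks: 195.3125 GiB (~195GB)
print(max_file_size_gib(100))  # --azureblob-chunk-size 100M: 4882.8125 GiB (~5TB)
```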
-Specific options
-Here are the command line options specific to this cloud storage system.
---azureblob-upload-cutoff=SIZE
-Cutoff for switching to chunked upload - must be <= 256MB. The default is 256MB.
---azureblob-chunk-size=SIZE
-Upload chunk size. Default 4MB. Note that this is stored in memory and there may be up to --transfers
chunks stored at once in memory. This can be at most 100MB.
---azureblob-access-tier=Hot/Cool/Archive
-Azure storage supports blob tiering, you can configure tier in advanced settings or supply flag while performing data transfer operations. If there is no access tier
specified, rclone doesn't apply any tier. rclone performs Set Tier
operation on blobs while uploading, if objects are not modified, specifying access tier
to new one will have no effect. If blobs are in archive tier
at remote, trying to perform data transfer operations from remote will not be allowed. User should first restore by tiering blob to Hot
or Cool
.
+
+Standard Options
+Here are the standard options specific to azureblob (Microsoft Azure Blob Storage).
+--azureblob-account
+Storage Account Name (leave blank to use connection string or SAS URL)
+
+- Config: account
+- Env Var: RCLONE_AZUREBLOB_ACCOUNT
+- Type: string
+- Default: ""
+
+--azureblob-key
+Storage Account Key (leave blank to use connection string or SAS URL)
+
+- Config: key
+- Env Var: RCLONE_AZUREBLOB_KEY
+- Type: string
+- Default: ""
+
+--azureblob-sas-url
+SAS URL for container level access only (leave blank if using account/key or connection string)
+
+- Config: sas_url
+- Env Var: RCLONE_AZUREBLOB_SAS_URL
+- Type: string
+- Default: ""
+
+Advanced Options
+Here are the advanced options specific to azureblob (Microsoft Azure Blob Storage).
+--azureblob-endpoint
+Endpoint for the service. Leave blank normally.
+
+- Config: endpoint
+- Env Var: RCLONE_AZUREBLOB_ENDPOINT
+- Type: string
+- Default: ""
+
+--azureblob-upload-cutoff
+Cutoff for switching to chunked upload (<= 256MB).
+
+- Config: upload_cutoff
+- Env Var: RCLONE_AZUREBLOB_UPLOAD_CUTOFF
+- Type: SizeSuffix
+- Default: 256M
+
+--azureblob-chunk-size
+Upload chunk size (<= 100MB).
+Note that this is stored in memory and there may be up to "--transfers" chunks stored at once in memory.
+
+- Config: chunk_size
+- Env Var: RCLONE_AZUREBLOB_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 4M
+
+--azureblob-list-chunk
+Size of blob list.
+This sets the number of blobs requested in each listing chunk. Default is the maximum, 5000. "List blobs" requests are permitted 2 minutes per megabyte to complete. If an operation is taking longer than 2 minutes per megabyte on average, it will time out (source). This can be used to limit the number of blob items returned, to avoid the timeout.
+
+- Config: list_chunk
+- Env Var: RCLONE_AZUREBLOB_LIST_CHUNK
+- Type: int
+- Default: 5000
+
+--azureblob-access-tier
+Access tier of blob: hot, cool or archive.
+Archived blobs can be restored by setting the access tier to hot or cool. Leave blank if you intend to use the default access tier, which is set at the account level.
+If there is no "access tier" specified, rclone doesn't apply any tier. rclone performs "Set Tier" operation on blobs while uploading, if objects are not modified, specifying "access tier" to new one will have no effect. If blobs are in "archive tier" at remote, trying to perform data transfer operations from remote will not be allowed. User should first restore by tiering blob to "Hot" or "Cool".
+
+- Config: access_tier
+- Env Var: RCLONE_AZUREBLOB_ACCESS_TIER
+- Type: string
+- Default: ""
+
+
Limitations
MD5 sums are only uploaded with chunked files if the source has an MD5 sum. This will always be the case for a local to azure copy.
Microsoft OneDrive
@@ -5870,51 +7969,36 @@ rclone ls azureblob:othercontainer
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
-No remotes found - make a new one
+e) Edit existing remote
n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
s) Set configuration password
-n/s> n
+q) Quit config
+e/n/d/r/c/s/q> n
name> remote
Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
- 1 / Amazon Drive
- \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
- \ "s3"
- 3 / Backblaze B2
- \ "b2"
- 4 / Dropbox
- \ "dropbox"
- 5 / Encrypt/Decrypt a remote
- \ "crypt"
- 6 / Google Cloud Storage (this is not Google Drive)
- \ "google cloud storage"
- 7 / Google Drive
- \ "drive"
- 8 / Hubic
- \ "hubic"
- 9 / Local Disk
- \ "local"
-10 / Microsoft OneDrive
+...
+17 / Microsoft OneDrive
\ "onedrive"
-11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
- \ "swift"
-12 / SSH/SFTP Connection
- \ "sftp"
-13 / Yandex Disk
- \ "yandex"
-Storage> 10
-Microsoft App Client Id - leave blank normally.
+...
+Storage> 17
+Microsoft App Client Id
+Leave blank normally.
+Enter a string value. Press Enter for the default ("").
client_id>
-Microsoft App Client Secret - leave blank normally.
+Microsoft App Client Secret
+Leave blank normally.
+Enter a string value. Press Enter for the default ("").
client_secret>
+Edit advanced config? (y/n)
+y) Yes
+n) No
+y/n> n
Remote config
-Choose OneDrive account type?
- * Say b for a OneDrive business account
- * Say p for a personal OneDrive account
-b) Business
-p) Personal
-b/p> p
Use auto config?
* Say Y if not sure
* Say N if you are working on a remote or headless machine
@@ -5925,11 +8009,32 @@ If your browser doesn't open automatically go to the following link: http://
Log in and authorize rclone for access
Waiting for code...
Got code
+Choose a number from below, or type in an existing value
+ 1 / OneDrive Personal or Business
+ \ "onedrive"
+ 2 / Sharepoint site
+ \ "sharepoint"
+ 3 / Type in driveID
+ \ "driveid"
+ 4 / Type in SiteID
+ \ "siteid"
+ 5 / Search a Sharepoint site
+ \ "search"
+Your choice> 1
+Found 1 drives, please select the one you want to use:
+0: OneDrive (business) id=b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk
+Chose drive to use:> 0
+Found drive 'root' of type 'business', URL: https://org-my.sharepoint.com/personal/you/Documents
+Is that okay?
+y) Yes
+n) No
+y/n> y
--------------------
[remote]
-client_id =
-client_secret =
-token = {"access_token":"XXXXXX"}
+type = onedrive
+token = {"access_token":"youraccesstoken","token_type":"Bearer","refresh_token":"yourrefreshtoken","expiry":"2018-08-26T22:39:52.486512262+08:00"}
+drive_id = b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk
+drive_type = business
--------------------
y) Yes this is OK
e) Edit this remote
@@ -5944,25 +8049,79 @@ y/e/d> y
rclone ls remote:
To copy a local directory to an OneDrive directory called backup
rclone copy /home/source remote:backup
-OneDrive for Business
-There is additional support for OneDrive for Business. Select "b" when ask
-Choose OneDrive account type?
- * Say b for a OneDrive business account
- * Say p for a personal OneDrive account
-b) Business
-p) Personal
-b/p>
-After that rclone requires an authentication of your account. The application will first authenticate your account, then query the OneDrive resource URL and do a second (silent) authentication for this resource URL.
+Getting your own Client ID and Key
+By default rclone uses a Client ID and Key shared by all rclone users when performing requests. If you are having problems with them (e.g. seeing a lot of throttling), you can get your own Client ID and Key by following the steps below:
+
+- Open https://apps.dev.microsoft.com/#/appList, then click Add an app (Choose Converged applications if applicable)
+- Enter a name for your app, and click continue. Copy and keep the Application Id under the app name for later use.
+- Under section Application Secrets, click Generate New Password. Copy and keep that password for later use.
+- Under section Platforms, click Add platform, then Web. Enter http://localhost:53682/ in Redirect URLs.
+- Under section Microsoft Graph Permissions, Add these delegated permissions: Files.Read, Files.ReadWrite, Files.Read.All, Files.ReadWrite.All, offline_access, User.Read.
+- Scroll to the bottom and click Save.
+
+Now the application is complete. Run rclone config to create or edit a OneDrive remote. Supply the app ID and password as Client ID and Secret, respectively. rclone will walk you through the remaining steps.
Modified time and hashes
OneDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.
OneDrive personal supports SHA1 type hashes. OneDrive for business and Sharepoint Server support QuickXorHash.
For all types of OneDrive you can use the --checksum
flag.
Deleting files
Any files you delete with rclone will end up in the trash. Microsoft doesn't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Microsoft's apps or via the OneDrive website.
-Specific options
-Here are the command line options specific to this cloud storage system.
---onedrive-chunk-size=SIZE
-Above this size files will be chunked - must be multiple of 320k. The default is 10MB. Note that the chunks will be buffered into memory.
+
+Standard Options
+Here are the standard options specific to onedrive (Microsoft OneDrive).
+--onedrive-client-id
+Microsoft App Client Id. Leave blank normally.
+
+- Config: client_id
+- Env Var: RCLONE_ONEDRIVE_CLIENT_ID
+- Type: string
+- Default: ""
+
+--onedrive-client-secret
+Microsoft App Client Secret. Leave blank normally.
+
+- Config: client_secret
+- Env Var: RCLONE_ONEDRIVE_CLIENT_SECRET
+- Type: string
+- Default: ""
+
+Advanced Options
+Here are the advanced options specific to onedrive (Microsoft OneDrive).
+--onedrive-chunk-size
+Chunk size to upload files with - must be multiple of 320k.
+Above this size files will be chunked - must be multiple of 320k. Note that the chunks will be buffered into memory.
+
+- Config: chunk_size
+- Env Var: RCLONE_ONEDRIVE_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 10M
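Since upload fragments must be a multiple of 320k, a custom --onedrive-chunk-size can be sanity-checked as below. The constant and helper are illustrative, not part of rclone:

```python
STEP = 320 * 1024  # OneDrive upload fragments must be multiples of 320 KiB

def round_chunk_size(requested: int) -> int:
    """Round a requested chunk size down to the nearest multiple of 320k."""
    if requested < STEP:
        raise ValueError("chunk size must be at least 320k")
    return requested - requested % STEP

print(round_chunk_size(10 * 1024 * 1024))  # the 10M default is already a valid multiple
```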
+
+--onedrive-drive-id
+The ID of the drive to use
+
+- Config: drive_id
+- Env Var: RCLONE_ONEDRIVE_DRIVE_ID
+- Type: string
+- Default: ""
+
+--onedrive-drive-type
+The type of the drive (personal | business | documentLibrary)
+
+- Config: drive_type
+- Env Var: RCLONE_ONEDRIVE_DRIVE_TYPE
+- Type: string
+- Default: ""
+
+--onedrive-expose-onenote-files
+Set to make OneNote files show up in directory listings.
+By default rclone will hide OneNote files in directory listings because operations like "Open" and "Update" won't work on them. But this behaviour may also prevent you from deleting them. If you want to delete OneNote files or otherwise want them to show up in directory listings, set this option.
+
+- Config: expose_onenote_files
+- Env Var: RCLONE_ONEDRIVE_EXPOSE_ONENOTE_FILES
+- Type: bool
+- Default: false
+
+
Limitations
Note that OneDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
There are quite a few characters that can't be in OneDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ?
in it will be mapped to ?
instead.
@@ -6057,8 +8216,26 @@ y/e/d> y
rclone copy /home/source remote:backup
Modified time and MD5SUMs
OpenDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.
-Deleting files
-Any files you delete with rclone will end up in the trash. Amazon don't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Amazon's apps or via the OpenDrive website. As of November 17, 2016, files are automatically deleted by Amazon from the trash after 30 days.
+
+Standard Options
+Here are the standard options specific to opendrive (OpenDrive).
+--opendrive-username
+Username
+
+- Config: username
+- Env Var: RCLONE_OPENDRIVE_USERNAME
+- Type: string
+- Default: ""
+
+--opendrive-password
+Password.
+
+- Config: password
+- Env Var: RCLONE_OPENDRIVE_PASSWORD
+- Type: string
+- Default: ""
+
+
Limitations
Note that OpenDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
There are quite a few characters that can't be in OpenDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ?
in it will be mapped to ?
instead.
@@ -6157,13 +8334,13 @@ y/e/d> y
rclone ls remote:bucket
Sync /home/local/directory
to the remote bucket, deleting any excess files in the bucket.
rclone sync /home/local/directory remote:bucket
---fast-list
+--fast-list
This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
Multipart uploads
rclone supports multipart uploads with QingStor which means that it can upload files bigger than 5GB. Note that files uploaded with multipart upload don't have an MD5SUM.
Buckets and Zone
With QingStor you can list buckets (rclone lsd
) using any zone, but you can only access the content of a bucket from the zone it was created in. If you attempt to access a bucket from the wrong zone, you will get an error, incorrect zone, the bucket is not in 'XXX' zone
.
-Authentication
+Authentication
There are two ways to supply rclone
with a set of QingStor credentials. In order of precedence:
- Directly in the rclone configuration file (as configured by
rclone config
)
@@ -6176,6 +8353,89 @@ y/e/d> y
- Secret Access Key:
QS_SECRET_ACCESS_KEY
or QS_SECRET_KEY
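That precedence order can be sketched as a simple fallback chain. The helper below is hypothetical, not rclone's implementation:

```python
import os

def qingstor_credentials(config: dict) -> tuple:
    """Resolve credentials: the rclone config file wins, then the
    environment variables listed above, in order."""
    access = (config.get("access_key_id")
              or os.environ.get("QS_ACCESS_KEY_ID")
              or os.environ.get("QS_ACCESS_KEY"))
    secret = (config.get("secret_access_key")
              or os.environ.get("QS_SECRET_ACCESS_KEY")
              or os.environ.get("QS_SECRET_KEY"))
    return access, secret
```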
+
+Standard Options
+Here are the standard options specific to qingstor (QingCloud Object Storage).
+--qingstor-env-auth
+Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+
+- Config: env_auth
+- Env Var: RCLONE_QINGSTOR_ENV_AUTH
+- Type: bool
+- Default: false
+- Examples:
+
+- "false"
+
+- Enter QingStor credentials in the next step
+
+- "true"
+
+- Get QingStor credentials from the environment (env vars or IAM)
+
+
+
+--qingstor-access-key-id
+QingStor Access Key ID. Leave blank for anonymous access or runtime credentials.
+
+- Config: access_key_id
+- Env Var: RCLONE_QINGSTOR_ACCESS_KEY_ID
+- Type: string
+- Default: ""
+
+--qingstor-secret-access-key
+QingStor Secret Access Key (password). Leave blank for anonymous access or runtime credentials.
+
+- Config: secret_access_key
+- Env Var: RCLONE_QINGSTOR_SECRET_ACCESS_KEY
+- Type: string
+- Default: ""
+
+--qingstor-endpoint
+Enter an endpoint URL to connect to the QingStor API. Leave blank to use the default value "https://qingstor.com:443".
+
+- Config: endpoint
+- Env Var: RCLONE_QINGSTOR_ENDPOINT
+- Type: string
+- Default: ""
+
+--qingstor-zone
+Zone to connect to. Default is "pek3a".
+
+- Config: zone
+- Env Var: RCLONE_QINGSTOR_ZONE
+- Type: string
+- Default: ""
+- Examples:
+
+- "pek3a"
+
+- The Beijing (China) Three Zone
+- Needs location constraint pek3a.
+
+- "sh1a"
+
+- The Shanghai (China) First Zone
+- Needs location constraint sh1a.
+
+- "gd2a"
+
+- The Guangdong (China) Second Zone
+- Needs location constraint gd2a.
+
+
+
+Advanced Options
+Here are the advanced options specific to qingstor (QingCloud Object Storage).
+--qingstor-connection-retries
+Number of connection retries.
+
+- Config: connection_retries
+- Env Var: RCLONE_QINGSTOR_CONNECTION_RETRIES
+- Type: int
+- Default: 3
+
+
Swift
Swift refers to Openstack Object Storage. Commercial implementations of that being:
@@ -6355,17 +8615,215 @@ tenant = $OS_TENANT_NAME
export RCLONE_CONFIG_MYREMOTE_TYPE=swift
export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
rclone lsd myremote:
---fast-list
+--fast-list
This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
--update and --use-server-modtime
As noted below, the modified time is stored on metadata on the object. It is used by default for all operations that require checking the time a file was last updated. It allows rclone to treat the remote more like a true filesystem, but it is inefficient because it requires an extra API call to retrieve the metadata.
For many operations, the time the object was last uploaded to the remote is sufficient to determine if it is "dirty". By using --update
along with --use-server-modtime
, you can avoid the extra API call and simply upload files whose local modtime is newer than the time it was last uploaded.
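The decision rclone makes here boils down to a single comparison. A simplified sketch of the --update check:

```python
def should_upload(local_mtime: float, remote_time: float) -> bool:
    """With --update, upload only when the local file is newer.
    With --use-server-modtime, remote_time is the last upload time,
    which is available without an extra metadata API call."""
    return local_mtime > remote_time

print(should_upload(local_mtime=2000.0, remote_time=1000.0))  # True: local is newer
```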
-Specific options
-Here are the command line options specific to this cloud storage system.
---swift-storage-policy=STRING
-Apply the specified storage policy when creating a new container. The policy cannot be changed afterwards. The allowed configuration values and their meaning depend on your Swift storage provider.
---swift-chunk-size=SIZE
+
+Standard Options
+Here are the standard options specific to swift (Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).
+--swift-env-auth
+Get swift credentials from environment variables in standard OpenStack form.
+
+- Config: env_auth
+- Env Var: RCLONE_SWIFT_ENV_AUTH
+- Type: bool
+- Default: false
+- Examples:
+
+- "false"
+
+- Enter swift credentials in the next step
+
+- "true"
+
+- Get swift credentials from environment vars. Leave other fields blank if using this.
+
+
+
+--swift-user
+User name to log in (OS_USERNAME).
+
+- Config: user
+- Env Var: RCLONE_SWIFT_USER
+- Type: string
+- Default: ""
+
+--swift-key
+API key or password (OS_PASSWORD).
+
+- Config: key
+- Env Var: RCLONE_SWIFT_KEY
+- Type: string
+- Default: ""
+
+--swift-auth
+Authentication URL for server (OS_AUTH_URL).
+
+- Config: auth
+- Env Var: RCLONE_SWIFT_AUTH
+- Type: string
+- Default: ""
+- Examples:
+
+- "https://auth.api.rackspacecloud.com/v1.0"
+
+- "https://lon.auth.api.rackspacecloud.com/v1.0"
+
+- "https://identity.api.rackspacecloud.com/v2.0"
+
+- "https://auth.storage.memset.com/v1.0"
+
+- "https://auth.storage.memset.com/v2.0"
+
+- "https://auth.cloud.ovh.net/v2.0"
+
+
+
+--swift-user-id
+User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+
+- Config: user_id
+- Env Var: RCLONE_SWIFT_USER_ID
+- Type: string
+- Default: ""
+
+--swift-domain
+User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+
+- Config: domain
+- Env Var: RCLONE_SWIFT_DOMAIN
+- Type: string
+- Default: ""
+
+--swift-tenant
+Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+
+- Config: tenant
+- Env Var: RCLONE_SWIFT_TENANT
+- Type: string
+- Default: ""
+
+--swift-tenant-id
+Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+
+- Config: tenant_id
+- Env Var: RCLONE_SWIFT_TENANT_ID
+- Type: string
+- Default: ""
+
+--swift-tenant-domain
+Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+
+- Config: tenant_domain
+- Env Var: RCLONE_SWIFT_TENANT_DOMAIN
+- Type: string
+- Default: ""
+
+--swift-region
+Region name - optional (OS_REGION_NAME)
+
+- Config: region
+- Env Var: RCLONE_SWIFT_REGION
+- Type: string
+- Default: ""
+
+--swift-storage-url
+Storage URL - optional (OS_STORAGE_URL)
+
+- Config: storage_url
+- Env Var: RCLONE_SWIFT_STORAGE_URL
+- Type: string
+- Default: ""
+
+--swift-auth-token
+Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+
+- Config: auth_token
+- Env Var: RCLONE_SWIFT_AUTH_TOKEN
+- Type: string
+- Default: ""
+
+--swift-auth-version
+AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+
+- Config: auth_version
+- Env Var: RCLONE_SWIFT_AUTH_VERSION
+- Type: int
+- Default: 0
+
+--swift-endpoint-type
+Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)
+
+- Config: endpoint_type
+- Env Var: RCLONE_SWIFT_ENDPOINT_TYPE
+- Type: string
+- Default: "public"
+- Examples:
+
+- "public"
+
+- Public (default, choose this if not sure)
+
+- "internal"
+
+- Internal (use internal service net)
+
+- "admin"
+
+
+
+--swift-storage-policy
+The storage policy to use when creating a new container
+This applies the specified storage policy when creating a new container. The policy cannot be changed afterwards. The allowed configuration values and their meaning depend on your Swift storage provider.
+
+- Config: storage_policy
+- Env Var: RCLONE_SWIFT_STORAGE_POLICY
+- Type: string
+- Default: ""
+- Examples:
+
+- ""
+
+- "pcs"
+
+- OVH Public Cloud Storage
+
+- "pca"
+
+- OVH Public Cloud Archive
+
+
+
+Advanced Options
+Here are the advanced options specific to swift (Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).
+--swift-chunk-size
+Above this size files will be chunked into a _segments container.
Above this size files will be chunked into a _segments container. The default for this is 5GB which is its maximum value.
+
+- Config: chunk_size
+- Env Var: RCLONE_SWIFT_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 5G
+
+
Modified time
The modified time is stored as metadata on the object as X-Object-Meta-Mtime
as floating point since the epoch accurate to 1 ns.
This is a defacto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.
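For illustration, the header value is just the epoch time rendered as a decimal with nanosecond precision. The exact formatting below is an assumption; rclone's Go code may render it slightly differently:

```python
def mtime_metadata(epoch_seconds: float) -> dict:
    # Build the Swift metadata header described above: floating point
    # seconds since the epoch, to nanosecond (9 decimal place) precision.
    return {"X-Object-Meta-Mtime": f"{epoch_seconds:.9f}"}

print(mtime_metadata(1535616000.5))  # {'X-Object-Meta-Mtime': '1535616000.500000000'}
```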
@@ -6469,8 +8927,28 @@ y/e/d> y
Modified time and hashes
pCloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. In order to set a Modification time pCloud requires the object be re-uploaded.
pCloud supports MD5 and SHA1 type hashes, so you can use the --checksum
flag.
-Deleting files
+Deleting files
Deleted files will be moved to the trash. Your subscription level will determine how long items stay in the trash. rclone cleanup
can be used to empty the trash.
+
+Standard Options
+Here are the standard options specific to pcloud (Pcloud).
+--pcloud-client-id
+Pcloud App Client Id. Leave blank normally.
+
+- Config: client_id
+- Env Var: RCLONE_PCLOUD_CLIENT_ID
+- Type: string
+- Default: ""
+
+--pcloud-client-secret
+Pcloud App Client Secret. Leave blank normally.
+
+- Config: client_secret
+- Env Var: RCLONE_PCLOUD_CLIENT_SECRET
+- Type: string
+- Default: ""
+
+
SFTP
SFTP is the Secure (or SSH) File Transfer Protocol.
SFTP runs over SSH v2 and is installed as standard with most modern SSH installations.
@@ -6572,20 +9050,119 @@ y/e/d> y
And then at the end of the session
eval `ssh-agent -k`
These commands can be used in scripts of course.
-Specific options
-Here are the command line options specific to this remote.
---sftp-ask-password
-Ask for the SFTP password if needed when no password has been configured.
---ssh-path-override
-Override path used by SSH connection. Allows checksum calculation when SFTP and SSH paths are different. This issue affects among others Synology NAS boxes.
-Shared folders can be found in directories representing volumes
-rclone sync /home/local/directory remote:/directory --ssh-path-override /volume2/directory
-Home directory can be found in a shared folder called homes
-rclone sync /home/local/directory remote:/home/directory --ssh-path-override /volume1/homes/USER/directory
Modified time
Modified times are stored on the server to 1 second precision.
Modified times are used in syncing and are fully supported.
Some SFTP servers disable setting/modifying the file modification time after upload (for example, certain configurations of ProFTPd with mod_sftp). If you are using one of these servers, you can set the option set_modtime = false
in your rclone backend configuration to disable this behaviour.
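For illustration, a saved sftp backend with this option disabled might look like the following rclone.conf entry (the remote name, host and user are hypothetical):

```
[mysftp]
type = sftp
host = example.com
user = backup
set_modtime = false
```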
+
+Standard Options
+Here are the standard options specific to sftp (SSH/SFTP Connection).
+--sftp-host
+SSH host to connect to
+
+- Config: host
+- Env Var: RCLONE_SFTP_HOST
+- Type: string
+- Default: ""
+- Examples:
+
+- "example.com"
+
+- Connect to example.com
+
+
+
+--sftp-user
+SSH username, leave blank for current username, ncw
+
+- Config: user
+- Env Var: RCLONE_SFTP_USER
+- Type: string
+- Default: ""
+
+--sftp-port
+SSH port, leave blank to use default (22)
+
+- Config: port
+- Env Var: RCLONE_SFTP_PORT
+- Type: string
+- Default: ""
+
+--sftp-pass
+SSH password, leave blank to use ssh-agent.
+
+- Config: pass
+- Env Var: RCLONE_SFTP_PASS
+- Type: string
+- Default: ""
+
+--sftp-key-file
+Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+
+- Config: key_file
+- Env Var: RCLONE_SFTP_KEY_FILE
+- Type: string
+- Default: ""
+
+--sftp-use-insecure-cipher
+Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+
+- Config: use_insecure_cipher
+- Env Var: RCLONE_SFTP_USE_INSECURE_CIPHER
+- Type: bool
+- Default: false
+- Examples:
+
+- "false"
+
+- Use default Cipher list.
+
+- "true"
+
+- Enables the use of the aes128-cbc cipher.
+
+
+
+--sftp-disable-hashcheck
+Disable the execution of SSH commands to determine if remote file hashing is available. Leave blank or set to false to enable hashing (recommended), set to true to disable hashing.
+
+- Config: disable_hashcheck
+- Env Var: RCLONE_SFTP_DISABLE_HASHCHECK
+- Type: bool
+- Default: false
+
+Advanced Options
+Here are the advanced options specific to sftp (SSH/SFTP Connection).
+--sftp-ask-password
+Allow asking for SFTP password when needed.
+
+- Config: ask_password
+- Env Var: RCLONE_SFTP_ASK_PASSWORD
+- Type: bool
+- Default: false
+
+--sftp-path-override
+Override path used by SSH connection.
+This allows checksum calculation when SFTP and SSH paths are different. This issue affects among others Synology NAS boxes.
+Shared folders can be found in directories representing volumes
+rclone sync /home/local/directory remote:/directory --ssh-path-override /volume2/directory
+Home directory can be found in a shared folder called "home"
+rclone sync /home/local/directory remote:/home/directory --ssh-path-override /volume1/homes/USER/directory
+
+- Config: path_override
+- Env Var: RCLONE_SFTP_PATH_OVERRIDE
+- Type: string
+- Default: ""
+
+--sftp-set-modtime
+Set the modified time on the remote if set.
+
+- Config: set_modtime
+- Env Var: RCLONE_SFTP_SET_MODTIME
+- Type: bool
+- Default: true
+
+
Limitations
SFTP supports checksums if the same login has shell access and md5sum
or sha1sum
as well as echo
are in the remote's PATH. This remote checksumming (file hashing) is recommended and enabled by default. Disabling the checksumming may be required if you are connecting to SFTP servers which are not under your control, and to which the execution of remote commands is prohibited. Set the configuration option disable_hashcheck
to true
to disable checksumming.
Note that on some SFTP servers (eg Synology) the paths are different for SSH and SFTP, so the hashes can't be calculated properly. For them using disable_hashcheck
is a good idea.
@@ -6594,6 +9171,126 @@ y/e/d> y
SFTP isn't supported under plan9 until this issue is fixed.
Note that since SFTP isn't HTTP based the following flags don't work with it: --dump-headers
, --dump-bodies
, --dump-auth
Note that --timeout
isn't supported (but --contimeout
is).
+Union
+The union
remote provides a unification similar to UnionFS using other remotes.
+Paths may be as deep as required or a local path, eg remote:directory/subdirectory
or /directory/subdirectory
.
+During the initial setup with rclone config
you will specify the target remotes as a space separated list. The target remotes can be either local paths or other remotes.
+The order of the remotes is important as it defines which remotes take precedence over others if there are files with the same name in the same logical path. The last remote is the topmost remote and replaces files with the same name from previous remotes.
+Only the last remote is used to write to and delete from; all other remotes are read-only.
+Subfolders can be used in a target remote. Assume a union remote named backup
with the remotes mydrive:private/backup mydrive2:/backup
. Invoking rclone mkdir backup:desktop
is exactly the same as invoking rclone mkdir mydrive2:/backup/desktop
.
+There will be no special handling of paths containing ..
segments. Invoking rclone mkdir backup:../desktop
is exactly the same as invoking rclone mkdir mydrive2:/backup/../desktop
.
+Here is an example of how to make a union called remote
for local folders. First run:
+ rclone config
+This will guide you through an interactive setup process:
+No remotes found - make a new one
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+ 1 / Alias for a existing remote
+ \ "alias"
+ 2 / Amazon Drive
+ \ "amazon cloud drive"
+ 3 / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)
+ \ "s3"
+ 4 / Backblaze B2
+ \ "b2"
+ 5 / Box
+ \ "box"
+ 6 / Builds a stackable unification remote, which can appear to merge the contents of several remotes
+ \ "union"
+ 7 / Cache a remote
+ \ "cache"
+ 8 / Dropbox
+ \ "dropbox"
+ 9 / Encrypt/Decrypt a remote
+ \ "crypt"
+10 / FTP Connection
+ \ "ftp"
+11 / Google Cloud Storage (this is not Google Drive)
+ \ "google cloud storage"
+12 / Google Drive
+ \ "drive"
+13 / Hubic
+ \ "hubic"
+14 / JottaCloud
+ \ "jottacloud"
+15 / Local Disk
+ \ "local"
+16 / Mega
+ \ "mega"
+17 / Microsoft Azure Blob Storage
+ \ "azureblob"
+18 / Microsoft OneDrive
+ \ "onedrive"
+19 / OpenDrive
+ \ "opendrive"
+20 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ \ "swift"
+21 / Pcloud
+ \ "pcloud"
+22 / QingCloud Object Storage
+ \ "qingstor"
+23 / SSH/SFTP Connection
+ \ "sftp"
+24 / Webdav
+ \ "webdav"
+25 / Yandex Disk
+ \ "yandex"
+26 / http Connection
+ \ "http"
+Storage> union
+List of space separated remotes.
+Can be 'remotea:test/dir remoteb:', '"remotea:test/space dir" remoteb:', etc.
+The last remote is used to write to.
+Enter a string value. Press Enter for the default ("").
+remotes>
+Remote config
+--------------------
+[remote]
+type = union
+remotes = C:\dir1 C:\dir2 C:\dir3
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Current remotes:
+
+Name Type
+==== ====
+remote union
+
+e) Edit existing remote
+n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+e/n/d/r/c/s/q> q
+Once configured you can then use rclone
like this,
+List directories in top level in C:\dir1
, C:\dir2
and C:\dir3
+rclone lsd remote:
+List all the files in C:\dir1
, C:\dir2
and C:\dir3
+rclone ls remote:
+Copy another local directory to the union directory called source, which will be placed into C:\dir3
+rclone copy C:\source remote:source
+
+Standard Options
+Here are the standard options specific to union (A stackable unification remote, which can appear to merge the contents of several remotes).
+--union-remotes
+List of space separated remotes. Can be 'remotea:test/dir remoteb:', '"remotea:test/space dir" remoteb:', etc. The last remote is used to write to.
+
+- Config: remotes
+- Env Var: RCLONE_UNION_REMOTES
+- Type: string
+- Default: ""
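Tying the option back to the earlier subfolder example, a union named backup over two remotes could be saved as a config entry like this (remote names taken from the example above):

```
[backup]
type = union
remotes = mydrive:private/backup mydrive2:/backup
```

As described above, only the last remote listed (mydrive2:/backup) is written to and deleted from; the others are read-only.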
+
+
WebDAV
Paths are specified as remote:path
Paths may be as deep as required, eg remote:directory/subdirectory
.
@@ -6667,6 +9364,76 @@ y/e/d> y
Modified time and hashes
Plain WebDAV does not support modified times. However when used with Owncloud or Nextcloud rclone will support modified times.
Hashes are not supported.
+
+Standard Options
+Here are the standard options specific to webdav (Webdav).
+--webdav-url
+URL of http host to connect to
+
+- Config: url
+- Env Var: RCLONE_WEBDAV_URL
+- Type: string
+- Default: ""
+- Examples:
+
+- "https://example.com"
+
+- Connect to example.com
+
+
+
+--webdav-vendor
+Name of the Webdav site/service/software you are using
+
+- Config: vendor
+- Env Var: RCLONE_WEBDAV_VENDOR
+- Type: string
+- Default: ""
+- Examples:
+
+- "nextcloud"
+
+- "owncloud"
+
+- "sharepoint"
+
+- "other"
+
+- Other site/service or software
+
+
+
+--webdav-user
+User name
+
+- Config: user
+- Env Var: RCLONE_WEBDAV_USER
+- Type: string
+- Default: ""
+
+--webdav-pass
+Password.
+
+- Config: pass
+- Env Var: RCLONE_WEBDAV_PASS
+- Type: string
+- Default: ""
+
+--webdav-bearer-token
+Bearer token instead of user/pass (eg a Macaroon)
+
+- Config: bearer_token
+- Env Var: RCLONE_WEBDAV_BEARER_TOKEN
+- Type: string
+- Default: ""
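As a sketch, a Nextcloud-flavoured entry in rclone.conf combining these options might look like this (the URL and username are placeholders; a real server usually needs its full WebDAV endpoint path appended to the URL):

```
[remote]
type = webdav
url = https://example.com
vendor = nextcloud
user = me
```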
+
+
Provider notes
See below for notes on specific providers.
Owncloud
@@ -6791,7 +9558,7 @@ y/e/d> y
rclone ls remote:directory
Sync /home/local/directory
to the remote path, deleting any excess files in the path.
rclone sync /home/local/directory remote:directory
---fast-list
+--fast-list
This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
Modified time
Modified times are supported and are stored accurate to 1 ns in custom metadata called rclone_modified
in RFC3339 with nanoseconds format.
@@ -6799,6 +9566,26 @@ y/e/d> y
MD5 checksums are natively supported by Yandex Disk.
Emptying Trash
If you wish to empty your trash you can use the rclone cleanup remote:
command which will permanently delete all your trashed files. This command does not take any path arguments.
+
+Standard Options
+Here are the standard options specific to yandex (Yandex Disk).
+--yandex-client-id
+Yandex Client Id. Leave blank normally.
+
+- Config: client_id
+- Env Var: RCLONE_YANDEX_CLIENT_ID
+- Type: string
+- Default: ""
+
+--yandex-client-secret
+Yandex Client Secret. Leave blank normally.
+
+- Config: client_secret
+- Env Var: RCLONE_YANDEX_CLIENT_SECRET
+- Type: string
+- Default: ""
+
+
Local Filesystem
Local paths are specified as normal filesystem paths, eg /path/to/wherever
, so
rclone sync /home/source /tmp/destination
@@ -6824,11 +9611,9 @@ nounc = true
And use rclone like this:
rclone copy c:\src nounc:z:\dst
This will use UNC paths on c:\src
but not on z:\dst
. Of course this will cause problems if the absolute path length of a file exceeds 258 characters on z, so only use this option if you have to.
-Specific options
-Here are the command line options specific to local storage
---copy-links, -L
+Symlinks / Junction points
Normally rclone will ignore symlinks or junction points (which behave like symlinks under Windows).
-If you supply this flag then rclone will follow the symlink and copy the pointed to file or directory.
+If you supply --copy-links
or -L
then rclone will follow the symlink and copy the pointed to file or directory.
This flag applies to all commands.
For example, supposing you have a directory structure like this
$ tree /tmp/a
@@ -6849,14 +9634,9 @@ nounc = true
6 two/three
6 b/two
6 b/one
---local-no-check-updated
-Don't check to see if the files change during upload.
-Normally rclone checks the size and modification time of files as they are being uploaded and aborts with a message which starts can't copy - source file is being updated
if the file changes during upload.
-However on some file systems this modification time check may fail (eg Glusterfs #2206) so this check can be disabled with this flag.
---local-no-unicode-normalization
-This flag is deprecated now. Rclone no longer normalizes unicode file names, but it compares them with unicode normalization in the sync routine instead.
---one-file-system, -x
-This tells rclone to stay in the filesystem specified by the root and not to recurse into different file systems.
+Restricting filesystems with --one-file-system
+Normally rclone will recurse through filesystems as mounted.
+However if you set --one-file-system
or -x
this tells rclone to stay in the filesystem specified by the root and not to recurse into different file systems.
For example if you have a directory hierarchy like this
root
├── disk1 - disk1 mounted on the root
@@ -6875,11 +9655,237 @@ nounc = true
0 file1
0 file2
NB Rclone (like most unix tools such as du
, rsync
and tar
) treats a bind mount to the same device as being on the same filesystem.
-NB This flag is only available on Unix based systems. On systems where it isn't supported (eg Windows) it will not appear as an valid flag.
+NB This flag is only available on Unix based systems. On systems where it isn't supported (eg Windows) it will be ignored.
+
+Standard Options
+Here are the standard options specific to local (Local Disk).
+--local-nounc
+Disable UNC (long path names) conversion on Windows
+
+- Config: nounc
+- Env Var: RCLONE_LOCAL_NOUNC
+- Type: string
+- Default: ""
+- Examples:
+
+- "true"
+
+- Disables long file names
+
+
+
+Advanced Options
+Here are the advanced options specific to local (Local Disk).
+--copy-links
+Follow symlinks and copy the pointed to item.
+
+- Config: copy_links
+- Env Var: RCLONE_LOCAL_COPY_LINKS
+- Type: bool
+- Default: false
+
--skip-links
-This flag disables warning messages on skipped symlinks or junction points, as you explicitly acknowledge that they should be skipped.
+Don't warn about skipped symlinks. This flag disables warning messages on skipped symlinks or junction points, as you explicitly acknowledge that they should be skipped.
+
+- Config: skip_links
+- Env Var: RCLONE_LOCAL_SKIP_LINKS
+- Type: bool
+- Default: false
+
+--local-no-unicode-normalization
+Don't apply unicode normalization to paths and filenames (Deprecated)
+This flag is deprecated now. Rclone no longer normalizes unicode file names, but it compares them with unicode normalization in the sync routine instead.
+
+- Config: no_unicode_normalization
+- Env Var: RCLONE_LOCAL_NO_UNICODE_NORMALIZATION
+- Type: bool
+- Default: false
+
+--local-no-check-updated
+Don't check to see if the files change during upload
+Normally rclone checks the size and modification time of files as they are being uploaded and aborts with a message which starts "can't copy - source file is being updated" if the file changes during upload.
+However on some file systems this modification time check may fail (eg Glusterfs #2206) so this check can be disabled with this flag.
+
+- Config: no_check_updated
+- Env Var: RCLONE_LOCAL_NO_CHECK_UPDATED
+- Type: bool
+- Default: false
+
+--one-file-system
+Don't cross filesystem boundaries (unix/macOS only).
+
+- Config: one_file_system
+- Env Var: RCLONE_LOCAL_ONE_FILE_SYSTEM
+- Type: bool
+- Default: false
+
+
Changelog
-v1.42 - 2018-09-01
+v1.44 - 2018-10-15
+
+- New commands
+
+- serve ftp: Add ftp server (Antoine GIRARD)
+- settier: perform storage tier changes on supported remotes (sandeepkru)
+
+- New Features
+
+- Reworked command line help
+
+- Make default help less verbose (Nick Craig-Wood)
+- Split flags up into global and backend flags (Nick Craig-Wood)
+- Implement specialised help for flags and backends (Nick Craig-Wood)
+- Show URL of backend help page when starting config (Nick Craig-Wood)
+
+- stats: Long names now split in center (Joanna Marek)
+- Add --log-format flag for more control over log output (dcpu)
+- rc: Add support for OPTIONS and basic CORS (frenos)
+- stats: show FatalErrors and NoRetryErrors in stats (Cédric Connes)
+
+- Bug Fixes
+
+- Fix -P not ending with a new line (Nick Craig-Wood)
+- config: don't create default config dir when user supplies --config (albertony)
+- Don't print non-ASCII characters with --progress on windows (Nick Craig-Wood)
+- Correct logs for excluded items (ssaqua)
+
+- Mount
+
+- Remove EXPERIMENTAL tags (Nick Craig-Wood)
+
+- VFS
+
+- Fix race condition detected by serve ftp tests (Nick Craig-Wood)
+- Add vfs/poll-interval rc command (Fabian Möller)
+- Enable rename for nearly all remotes using server side Move or Copy (Nick Craig-Wood)
+- Reduce directory cache cleared by poll-interval (Fabian Möller)
+- Remove EXPERIMENTAL tags (Nick Craig-Wood)
+
+- Local
+
+- Skip bad symlinks in dir listing with -L enabled (Cédric Connes)
+- Preallocate files on Windows to reduce fragmentation (Nick Craig-Wood)
+- Preallocate files on linux with fallocate(2) (Nick Craig-Wood)
+
+- Cache
+
+- Add cache/fetch rc function (Fabian Möller)
+- Fix worker scale down (Fabian Möller)
+- Improve performance by not sending info requests for cached chunks (dcpu)
+- Fix error return value of cache/fetch rc method (Fabian Möller)
+- Documentation fix for cache-chunk-total-size (Anagh Kumar Baranwal)
+- Preserve leading / in wrapped remote path (Fabian Möller)
+- Add plex_insecure option to skip certificate validation (Fabian Möller)
+- Remove entries that no longer exist in the source (dcpu)
+
+- Crypt
+
+- Preserve leading / in wrapped remote path (Fabian Möller)
+
+- Alias
+
+- Fix handling of Windows network paths (Nick Craig-Wood)
+
+- Azure Blob
+
+- Add --azureblob-list-chunk parameter (Santiago Rodríguez)
+- Implemented settier command support on azureblob remote. (sandeepkru)
+- Work around SDK bug which causes errors for chunk-sized files (Nick Craig-Wood)
+
+- Box
+
+- Implement link sharing. (Sebastian Bünger)
+
+- Drive
+
+- Add --drive-import-formats - google docs can now be imported (Fabian Möller)
+
+- Rewrite mime type and extension handling (Fabian Möller)
+- Add document links (Fabian Möller)
+- Add support for multipart document extensions (Fabian Möller)
+- Add support for apps-script to json export (Fabian Möller)
+- Fix escaped chars in documents during list (Fabian Möller)
+
+- Add --drive-v2-download-min-size a workaround for slow downloads (Fabian Möller)
+- Improve directory notifications in ChangeNotify (Fabian Möller)
+- When listing team drives in config, continue on failure (Nick Craig-Wood)
+
+- FTP
+
+- Add a small pause after failed upload before deleting file (Nick Craig-Wood)
+
+- Google Cloud Storage
+
+- Fix service_account_file being ignored (Fabian Möller)
+
+- Jottacloud
+
+- Minor improvement in quota info (omit if unlimited) (albertony)
+- Add --fast-list support (albertony)
+- Add permanent delete support: --jottacloud-hard-delete (albertony)
+- Add link sharing support (albertony)
+- Fix handling of reserved characters. (Sebastian Bünger)
+- Fix socket leak on Object.Remove (Nick Craig-Wood)
+
+- Onedrive
+
+- Rework to support Microsoft Graph (Cnly)
+
+- NB this will require re-authenticating the remote
+
+- Removed upload cutoff and always do session uploads (Oliver Heyme)
+- Use single-part upload for empty files (Cnly)
+- Fix new fields not saved when editing old config (Alex Chen)
+- Fix sometimes special chars in filenames not replaced (Alex Chen)
+- Ignore OneNote files by default (Alex Chen)
+- Add link sharing support (jackyzy823)
+
+- S3
+
+- Use custom pacer, to retry operations when reasonable (Craig Miskell)
+- Use configured server-side-encryption and storage class options when calling CopyObject() (Paul Kohout)
+- Make --s3-v2-auth flag (Nick Craig-Wood)
+- Fix v2 auth on files with spaces (Nick Craig-Wood)
+
+- Union
+
+- Implement union backend which reads from multiple backends (Felix Brucker)
+- Implement optional interfaces (Move, DirMove, Copy etc) (Nick Craig-Wood)
+- Fix ChangeNotify to support multiple remotes (Fabian Möller)
+- Fix --backup-dir on union backend (Nick Craig-Wood)
+
+- WebDAV
+
+- Add another time format (Nick Craig-Wood)
+- Add a small pause after failed upload before deleting file (Nick Craig-Wood)
+- Add workaround for missing mtime (buergi)
+- Sharepoint: Renew cookies after 12hrs (Henning Surmeier)
+
+- Yandex
+
+- Remove redundant nil checks (teresy)
+
+
+v1.43.1 - 2018-09-07
+Point release to fix hubic and azureblob backends.
+
+- Bug Fixes
+
+- ncdu: Return error instead of log.Fatal in Show (Fabian Möller)
+- cmd: Fix crash with --progress and --stats 0 (Nick Craig-Wood)
+- docs: Tidy website display (Anagh Kumar Baranwal)
+
+- Azure Blob:
+
+- Fix multi-part uploads. (sandeepkru)
+
+- Hubic
+
+- Fix uploads (Nick Craig-Wood)
+- Retry auth fetching if it fails to make hubic more reliable (Nick Craig-Wood)
+
+
+v1.43 - 2018-09-01
- New backends
Forum
diff --git a/MANUAL.md b/MANUAL.md
index d47cb786a..64063b5e2 100644
--- a/MANUAL.md
+++ b/MANUAL.md
@@ -1,6 +1,6 @@
% rclone(1) User Manual
% Nick Craig-Wood
-% Sep 01, 2018
+% Oct 15, 2018
Rclone
======
@@ -54,8 +54,9 @@ Features
* [Sync](https://rclone.org/commands/rclone_sync/) (one way) mode to make a directory identical
* [Check](https://rclone.org/commands/rclone_check/) mode to check for file hash equality
* Can sync to and from network, eg two different cloud accounts
- * Optional encryption ([Crypt](https://rclone.org/crypt/))
- * Optional cache ([Cache](https://rclone.org/cache/))
+ * [Encryption](https://rclone.org/crypt/) backend
+ * [Cache](https://rclone.org/cache/) backend
+ * [Union](https://rclone.org/union/) backend
* Optional FUSE mount ([rclone mount](https://rclone.org/commands/rclone_mount/))
Links
@@ -219,6 +220,7 @@ See the following for detailed instructions for
* [Pcloud](https://rclone.org/pcloud/)
* [QingStor](https://rclone.org/qingstor/)
* [SFTP](https://rclone.org/sftp/)
+ * [Union](https://rclone.org/union/)
* [WebDAV](https://rclone.org/webdav/)
* [Yandex Disk](https://rclone.org/yandex/)
* [The local filesystem](https://rclone.org/local/)
@@ -1854,7 +1856,7 @@ rclone lsjson remote:path [flags]
## rclone mount
-Mount the remote as a mountpoint. **EXPERIMENTAL**
+Mount the remote as a file system on a mountpoint.
### Synopsis
@@ -1863,8 +1865,6 @@ rclone mount allows Linux, FreeBSD, macOS and Windows to
mount any of Rclone's cloud storage systems as a file system with
FUSE.
-This is **EXPERIMENTAL** - use with care.
-
First set up your remote using `rclone config`. Check it works with `rclone ls` etc.
Start the mount like this
@@ -1939,7 +1939,7 @@ File systems expect things to be 100% reliable, whereas cloud storage
systems are a long way from 100% reliable. The rclone sync/copy
commands cope with this with lots of retries. However rclone mount
can't use retries in the same way without making local copies of the
-uploads. Look at the **EXPERIMENTAL** [file caching](#file-caching)
+uploads. Look at the [file caching](#file-caching)
for solutions to make mount mount more reliable.
### Attribute caching
@@ -2049,8 +2049,6 @@ The maximum memory used by rclone for buffering can be up to
### File Caching
-**NB** File caching is **EXPERIMENTAL** - use with care!
-
These flags control the VFS file caching options. The VFS layer is
used by rclone mount to make a cloud storage system work more like a
normal file system.
@@ -2404,6 +2402,190 @@ rclone serve [opts] [flags]
-h, --help help for serve
```
+## rclone serve ftp
+
+Serve remote:path over FTP.
+
+### Synopsis
+
+
+rclone serve ftp implements a basic ftp server to serve the
+remote over the FTP protocol. This can be viewed with an ftp client
+or you can make a remote of type ftp to read and write it.
+
+### Server options
+
+Use --addr to specify which IP address and port the server should
+listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all
+IPs. By default it only listens on localhost. You can use port
+:0 to let the OS choose an available port.
+
+If you set --addr to listen on a public or LAN accessible IP address
+then using Authentication is advised - see the next section for info.
+
+#### Authentication
+
+By default this will serve files without needing a login.
+
+You can set a single username and password with the --user and --pass flags.
+
+### Directory Cache
+
+Using the `--dir-cache-time` flag, you can set how long a
+directory should be considered up to date and not refreshed from the
+backend. Changes made locally in the mount may appear immediately or
+invalidate the cache. However, changes done on the remote will only
+be picked up once the cache expires.
+
+Alternatively, you can send a `SIGHUP` signal to rclone for
+it to flush all directory caches, regardless of how old they are.
+Assuming only one rclone instance is running, you can reset the cache
+like this:
+
+ kill -SIGHUP $(pidof rclone)
+
+If you configure rclone with a [remote control](/rc) then you can use
+rclone rc to flush the whole directory cache:
+
+ rclone rc vfs/forget
+
+Or individual files or directories:
+
+ rclone rc vfs/forget file=path/to/file dir=path/to/dir
+
+### File Buffering
+
+The `--buffer-size` flag determines the amount of memory
+that will be used to buffer data in advance.
+
+Each open file descriptor will try to keep the specified amount of
+data in memory at all times. The buffered data is bound to one file
+descriptor and won't be shared between multiple open file descriptors
+of the same file.
+
+This flag is an upper limit for the used memory per file descriptor.
+The buffer will only use memory for data that is downloaded but not
+yet read. If the buffer is empty, only a small amount of memory
+will be used.
+The maximum memory used by rclone for buffering can be up to
+`--buffer-size * open files`.
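A rough worked example of that bound, assuming the default 16M buffer size and ten open files (both numbers illustrative):

```shell
# Worst-case buffer memory = --buffer-size * number of open files.
buffer_size_mib=16      # rclone's default --buffer-size is 16M
open_files=10           # illustrative
echo "$(( buffer_size_mib * open_files )) MiB worst case"
```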
+
+### File Caching
+
+These flags control the VFS file caching options. The VFS layer is
+used by rclone mount to make a cloud storage system work more like a
+normal file system.
+
+You'll need to enable VFS caching if you want, for example, to read
+and write simultaneously to a file. See below for more details.
+
+Note that the VFS cache works in addition to the cache backend and you
+may find that you need one or the other or both.
+
+ --cache-dir string Directory rclone will use for caching.
+ --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+ --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+
+If run with `-vv` rclone will print the location of the file cache. The
+files are stored in the user cache file area which is OS dependent but
+can be controlled with `--cache-dir` or setting the appropriate
+environment variable.
+
+The cache has 4 different modes selected by `--vfs-cache-mode`.
+The higher the cache mode the more compatible rclone becomes at the
+cost of using disk space.
+
+Note that files are written back to the remote only when they are
+closed so if rclone is quit or dies with open files then these won't
+get written back to the remote. However they will still be in the on
+disk cache.
+
+#### --vfs-cache-mode off
+
+In this mode the cache will read directly from the remote and write
+directly to the remote without caching anything on disk.
+
+This will mean some operations are not possible
+
+ * Files can't be opened for both read AND write
+ * Files opened for write can't be seeked
+ * Existing files opened for write must have O_TRUNC set
+ * Files open for read with O_TRUNC will be opened write only
+ * Files open for write only will behave as if O_TRUNC was supplied
+ * Open modes O_APPEND, O_TRUNC are ignored
+ * If an upload fails it can't be retried
+
+#### --vfs-cache-mode minimal
+
+This is very similar to "off" except that files opened for read AND
+write will be buffered to disks. This means that files opened for
+write will be a lot more compatible, but uses minimal disk space.
+
+These operations are not possible
+
+ * Files opened for write only can't be seeked
+ * Existing files opened for write must have O_TRUNC set
+ * Files opened for write only will ignore O_APPEND, O_TRUNC
+ * If an upload fails it can't be retried
+
+#### --vfs-cache-mode writes
+
+In this mode files opened for read only are still read directly from
+the remote, write only and read/write files are buffered to disk
+first.
+
+This mode should support all normal file system operations.
+
+If an upload fails it will be retried up to --low-level-retries times.
+
+#### --vfs-cache-mode full
+
+In this mode all reads and writes are buffered to and from disk. When
+a file is opened for read it will be downloaded in its entirety first.
+
+This may be appropriate for your needs, or you may prefer to look at
+the cache backend which does a much more sophisticated job of caching,
+including caching directory hierarchies and chunks of files.
+
+In this mode, unlike the others, when a file is written to the disk,
+it will be kept on the disk after it is written to the remote. It
+will be purged on a schedule according to `--vfs-cache-max-age`.
+
+This mode should support all normal file system operations.
+
+If an upload or download fails it will be retried up to
+--low-level-retries times.
+
+
+```
+rclone serve ftp remote:path [flags]
+```
+
+### Options
+
+```
+ --addr string IPaddress:Port or :Port to bind server to. (default "localhost:2121")
+ --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+ --gid uint32 Override the gid field set by the filesystem. (default 502)
+ -h, --help help for ftp
+ --no-checksum Don't compare checksums on up/download.
+ --no-modtime Don't read/write the modification time (can speed things up).
+ --no-seek Don't allow seeking in files.
+ --pass string Password for authentication. (empty value allow every password)
+ --passive-port string Passive port range to use. (default "30000-32000")
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
+ --read-only Mount read-only.
+ --uid uint32 Override the uid field set by the filesystem. (default 502)
+ --umask int Override the permission bits set by the filesystem. (default 2)
+ --user string User name for authentication. (default "anonymous")
+ --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+ --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+ --vfs-read-chunk-size int Read the source objects in chunks. (default 128M)
+ --vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
+```
+
## rclone serve http
Serve the remote over HTTP.
@@ -2515,8 +2697,6 @@ The maximum memory used by rclone for buffering can be up to
### File Caching
-**NB** File caching is **EXPERIMENTAL** - use with care!
-
These flags control the VFS file caching options. The VFS layer is
used by rclone mount to make a cloud storage system work more like a
normal file system.
@@ -2911,8 +3091,6 @@ The maximum memory used by rclone for buffering can be up to
### File Caching
-**NB** File caching is **EXPERIMENTAL** - use with care!
-
These flags control the VFS file caching options. The VFS layer is
used by rclone mount to make a cloud storage system work more like a
normal file system.
@@ -3035,6 +3213,46 @@ rclone serve webdav remote:path [flags]
--vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
```
+## rclone settier
+
+Changes storage class/tier of objects in remote.
+
+### Synopsis
+
+
+rclone settier changes storage tier or class at remote if supported.
+A few cloud storage services provide different storage classes for objects,
+for example AWS S3 and Glacier, Azure Blob storage - Hot, Cool and Archive,
+Google Cloud Storage - Regional Storage, Nearline, Coldline etc.
+
+Note that certain tier changes make objects unavailable for immediate access.
+For example, tiering to archive in Azure Blob storage puts objects in a frozen state;
+the user can restore them by setting the tier to Hot/Cool. Similarly, S3 to Glacier makes objects
+inaccessible.
+
+You can use it to tier a single object
+
+ rclone settier Cool remote:path/file
+
+Or use rclone filters to set the tier on only specific files
+
+ rclone --include "*.txt" settier Hot remote:path/dir
+
+Or just provide a remote directory and all files in the directory will be tiered
+
+ rclone settier tier remote:path/dir
+
+
+```
+rclone settier tier remote:path [flags]
+```
+
+### Options
+
+```
+ -h, --help help for settier
+```
+
## rclone touch
Create new file or change file modification time.
@@ -3561,6 +3779,10 @@ Note that if you are using the `logrotate` program to manage rclone's
logs, then you should use the `copytruncate` option as rclone doesn't
have a signal to rotate logs.
+### --log-format LIST ###
+
+Comma separated list of log format options. `date`, `time`, `microseconds`, `longfile`, `shortfile`, `UTC`. The default is "`date`,`time`".
+
### --log-level LEVEL ###
This sets the log level for rclone. The default log level is `NOTICE`.
@@ -3673,7 +3895,7 @@ files if they are incorrect as it would normally.
This can be used if the remote is being synced with another tool also
(eg the Google Drive client).
-### --P, --progress ###
+### -P, --progress ###
This flag makes rclone update the stats in a static block in the
terminal providing a realtime overview of the transfer.
@@ -3688,6 +3910,10 @@ with the `--stats` flag.
This can be used with the `--stats-one-line` flag for a simpler
display.
+Note: On Windows until [this bug](https://github.com/Azure/go-ansiterm/issues/26)
+is fixed all non-ASCII characters will be replaced with `.` when
+`--progress` is in use.
+
### -q, --quiet ###
Normally rclone outputs stats and a completion message. If you set
@@ -3842,7 +4068,8 @@ will be considered.
If the destination does not support server-side copy or move, rclone
will fall back to the default behaviour and log an error level message
-to the console.
+to the console. Note: Encrypted destinations are not supported
+by `--track-renames`.
Note that `--track-renames` uses extra memory to keep track of all
the rename candidates.
@@ -4908,6 +5135,33 @@ Eg
rclone rc cache/expire remote=path/to/sub/folder/
rclone rc cache/expire remote=/ withData=true
+### cache/fetch: Fetch file chunks
+
+Ensure the specified file chunks are cached on disk.
+
+The chunks= parameter specifies the file chunks to check.
+It takes a comma separated list of array slice indices.
+The slice indices are similar to Python slices: start[:end]
+
+start is the 0 based chunk number from the beginning of the file
+to fetch inclusive. end is the 0 based chunk number from the beginning
+of the file to fetch exclusive.
+Both values can be negative, in which case they count from the back
+of the file. The value "-5:" represents the last 5 chunks of a file.
+
+Some valid examples are:
+":5,-5:" -> the first and last five chunks
+"0,-2" -> the first and the second last chunk
+"0:10" -> the first ten chunks
+
+Any parameter with a key that starts with "file" can be used to
+specify files to fetch, eg
+
+ rclone rc cache/fetch chunks=0 file=hello file2=home/goodbye
+
+File names will automatically be encrypted when a crypt remote
+is used on top of the cache.
+
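The `chunks=` slice syntax above can be illustrated with a small parser. `chunk_ranges` is a hypothetical helper that mirrors the description (0 based, start inclusive, end exclusive, negatives counted from the back); it is not rclone's own code:

```python
def chunk_ranges(spec, total):
    """Expand a chunks= spec like ':5,-5:' into a set of chunk indices."""
    wanted = set()
    for part in spec.split(","):
        if ":" in part:
            start_s, end_s = part.split(":")
            start = int(start_s) if start_s else 0
            end = int(end_s) if end_s else total
            if start < 0:  # negative values count from the end of the file
                start += total
            if end < 0:
                end += total
            wanted.update(range(start, end))
        else:
            idx = int(part)  # a bare index selects a single chunk
            if idx < 0:
                idx += total
            wanted.add(idx)
    return wanted
```

With a 20-chunk file, `":5,-5:"` selects chunks 0-4 and 15-19, and `"0,-2"` selects chunks 0 and 18.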
### cache/stats: Get cache stats
Show statistics for the cache remote.
@@ -4960,6 +5214,8 @@ Returns the following values:
"speed": average speed in bytes/sec since start of the process,
"bytes": total transferred bytes since the start of the process,
"errors": number of errors,
+ "fatalError": whether there has been at least one FatalError,
+ "retryError": whether there has been at least one non-NoRetryError,
"checks": number of checked files,
"transfers": number of transferred files,
"deletes" : number of deleted files,
@@ -5016,6 +5272,28 @@ starting with dir will forget that dir, eg
rclone rc vfs/forget file=hello file2=goodbye dir=home/junk
+### vfs/poll-interval: Get the status or update the value of the poll-interval option.
+
+Without any parameter given this returns the current status of the
+poll-interval setting.
+
+When the interval=duration parameter is set, the poll-interval value
+is updated and the polling function is notified.
+Setting interval=0 disables poll-interval.
+
+ rclone rc vfs/poll-interval interval=5m
+
+The timeout=duration parameter can be used to specify a time to wait
+for the current poll function to apply the new value.
+If timeout is less than or equal to 0, which is the default, rclone waits indefinitely.
+
+The new poll-interval value will only be active when the timeout is
+not reached.
+
+If poll-interval is updated or disabled temporarily, some changes
+might not get picked up by the polling function, depending on the
+remote in use.
+
### vfs/refresh: Refresh the directory cache.
This reads the directories for the specified paths and freshens the
@@ -5056,6 +5334,9 @@ If an error occurs then there will be an HTTP error status (usually
400) and the body of the response will contain a JSON encoded error
object.
+The server implements basic CORS support and allows all origins.
+The response to a preflight OPTIONS request will echo the requested "Access-Control-Request-Headers" back.
+
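The preflight behaviour can be sketched as a function from request headers to response headers. The header names are the standard CORS ones; the echoing logic here is an illustration of the documented behaviour, not rclone's actual implementation:

```python
def preflight_headers(request_headers):
    """Build CORS response headers for an OPTIONS preflight request.

    Allows all origins, and echoes the requested headers back as
    Access-Control-Allow-Headers, as described above.
    """
    response = {"Access-Control-Allow-Origin": "*"}
    requested = request_headers.get("Access-Control-Request-Headers")
    if requested is not None:
        response["Access-Control-Allow-Headers"] = requested
    return response
```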
### Using POST with URL parameters only
```
@@ -5331,17 +5612,17 @@ operations more efficient.
| Amazon Drive | Yes | No | Yes | Yes | No [#575](https://github.com/ncw/rclone/issues/575) | No | No | No [#2178](https://github.com/ncw/rclone/issues/2178) | No |
| Amazon S3 | No | Yes | No | No | No | Yes | Yes | No [#2178](https://github.com/ncw/rclone/issues/2178) | No |
| Backblaze B2 | No | No | No | No | Yes | Yes | Yes | No [#2178](https://github.com/ncw/rclone/issues/2178) | No |
-| Box | Yes | Yes | Yes | Yes | No [#575](https://github.com/ncw/rclone/issues/575) | No | Yes | No [#2178](https://github.com/ncw/rclone/issues/2178) | No |
+| Box | Yes | Yes | Yes | Yes | No [#575](https://github.com/ncw/rclone/issues/575) | No | Yes | Yes | No |
| Dropbox | Yes | Yes | Yes | Yes | No [#575](https://github.com/ncw/rclone/issues/575) | No | Yes | Yes | Yes |
| FTP | No | No | Yes | Yes | No | No | Yes | No [#2178](https://github.com/ncw/rclone/issues/2178) | No |
| Google Cloud Storage | Yes | Yes | No | No | No | Yes | Yes | No [#2178](https://github.com/ncw/rclone/issues/2178) | No |
| Google Drive | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| HTTP | No | No | No | No | No | No | No | No [#2178](https://github.com/ncw/rclone/issues/2178) | No |
| Hubic | Yes † | Yes | No | No | No | Yes | Yes | No [#2178](https://github.com/ncw/rclone/issues/2178) | Yes |
-| Jottacloud | Yes | Yes | Yes | Yes | No | No | No | No | No |
+| Jottacloud | Yes | Yes | Yes | Yes | No | Yes | No | Yes | Yes |
| Mega | Yes | No | Yes | Yes | No | No | No | No [#2178](https://github.com/ncw/rclone/issues/2178) | Yes |
| Microsoft Azure Blob Storage | Yes | Yes | No | No | No | Yes | No | No [#2178](https://github.com/ncw/rclone/issues/2178) | No |
-| Microsoft OneDrive | Yes | Yes | Yes | Yes | No [#575](https://github.com/ncw/rclone/issues/575) | No | No | No [#2178](https://github.com/ncw/rclone/issues/2178) | Yes |
+| Microsoft OneDrive | Yes | Yes | Yes | Yes | No [#575](https://github.com/ncw/rclone/issues/575) | No | No | Yes | Yes |
| OpenDrive | Yes | Yes | Yes | Yes | No | No | No | No | No |
| Openstack Swift | Yes † | Yes | No | No | No | Yes | Yes | No [#2178](https://github.com/ncw/rclone/issues/2178) | Yes |
| pCloud | Yes | Yes | Yes | Yes | Yes | No | No | No [#2178](https://github.com/ncw/rclone/issues/2178) | Yes |
@@ -5545,6 +5826,23 @@ Copy another local directory to the alias directory called source
rclone copy /home/source remote:source
+
+### Standard Options
+
+Here are the standard options specific to alias (Alias for an existing remote).
+
+#### --alias-remote
+
+Remote or path to alias.
+Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path".
+
+- Config: remote
+- Env Var: RCLONE_ALIAS_REMOTE
+- Type: string
+- Default: ""
+
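The `Config`/`Env Var` pairs in these option listings follow a regular naming pattern, which can be sketched as follows (a hypothetical helper inferred from the listings, not an rclone API):

```python
def option_env_var(backend, config_name):
    """Derive the environment variable name for a backend option.

    Follows the RCLONE_<BACKEND>_<OPTION> pattern seen in the listings,
    e.g. the alias backend's `remote` option -> RCLONE_ALIAS_REMOTE.
    """
    return "RCLONE_{}_{}".format(
        backend.upper(), config_name.upper().replace("-", "_")
    )
```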
+
+
Amazon Drive
-----------------------------------------
@@ -5714,23 +6012,65 @@ Let's say you usually use `amazon.co.uk`. When you authenticate with
rclone it will take you to an `amazon.com` page to log in. Your
`amazon.co.uk` email and password should work here just fine.
-### Specific options ###
+
+### Standard Options
-Here are the command line options specific to this cloud storage
-system.
+Here are the standard options specific to amazon cloud drive (Amazon Drive).
-#### --acd-templink-threshold=SIZE ####
+#### --acd-client-id
-Files this size or more will be downloaded via their `tempLink`. This
-is to work around a problem with Amazon Drive which blocks downloads
-of files bigger than about 10GB. The default for this is 9GB which
-shouldn't need to be changed.
+Amazon Application Client ID.
-To download files above this threshold, rclone requests a `tempLink`
-which downloads the file through a temporary URL directly from the
-underlying S3 storage.
+- Config: client_id
+- Env Var: RCLONE_ACD_CLIENT_ID
+- Type: string
+- Default: ""
-#### --acd-upload-wait-per-gb=TIME ####
+#### --acd-client-secret
+
+Amazon Application Client Secret.
+
+- Config: client_secret
+- Env Var: RCLONE_ACD_CLIENT_SECRET
+- Type: string
+- Default: ""
+
+### Advanced Options
+
+Here are the advanced options specific to amazon cloud drive (Amazon Drive).
+
+#### --acd-auth-url
+
+Auth server URL.
+Leave blank to use Amazon's.
+
+- Config: auth_url
+- Env Var: RCLONE_ACD_AUTH_URL
+- Type: string
+- Default: ""
+
+#### --acd-token-url
+
+Token server URL.
+Leave blank to use Amazon's.
+
+- Config: token_url
+- Env Var: RCLONE_ACD_TOKEN_URL
+- Type: string
+- Default: ""
+
+#### --acd-checkpoint
+
+Checkpoint for internal polling (debug).
+
+- Config: checkpoint
+- Env Var: RCLONE_ACD_CHECKPOINT
+- Type: string
+- Default: ""
+
+#### --acd-upload-wait-per-gb
+
+Additional time per GB to wait after a failed complete upload to see if it appears.
Sometimes Amazon Drive gives an error when a file has been fully
uploaded but the file appears anyway after a little while. This
@@ -5749,9 +6089,34 @@ most likely appear correctly eventually.
These values were determined empirically by observing lots of uploads
of big files for a range of file sizes.
-Upload with the `-v` flag to see more info about what rclone is doing
+Upload with the "-v" flag to see more info about what rclone is doing
in this situation.
+- Config: upload_wait_per_gb
+- Env Var: RCLONE_ACD_UPLOAD_WAIT_PER_GB
+- Type: Duration
+- Default: 3m0s
+
+#### --acd-templink-threshold
+
+Files >= this size will be downloaded via their tempLink.
+
+Files this size or more will be downloaded via their "tempLink". This
+is to work around a problem with Amazon Drive which blocks downloads
+of files bigger than about 10GB. The default for this is 9GB which
+shouldn't need to be changed.
+
+To download files above this threshold, rclone requests a "tempLink"
+which downloads the file through a temporary URL directly from the
+underlying S3 storage.
+
+- Config: templink_threshold
+- Env Var: RCLONE_ACD_TEMPLINK_THRESHOLD
+- Type: SizeSuffix
+- Default: 9G
+
+
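The `SizeSuffix` notation used by options like this one ("9G", "5M") and the threshold check can be sketched as below. Both helpers are illustrative assumptions: rclone's real SizeSuffix parser accepts more forms, and the comparison simply models "files >= this size use their tempLink":

```python
def parse_size_suffix(s):
    """Parse a SizeSuffix like '9G' or '5M' into bytes (binary multiples)."""
    units = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    suffix = s[-1].upper()
    if suffix in units:
        return int(float(s[:-1]) * units[suffix])
    return int(s)  # plain number of bytes

def use_templink(file_size, threshold=parse_size_suffix("9G")):
    # Files at or above the threshold are downloaded via their tempLink.
    return file_size >= threshold
```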
+
### Limitations ###
Note that Amazon Drive is case insensitive so you can't have a
@@ -6141,56 +6506,545 @@ tries to access the data you will see an error like below.
In this case you need to [restore](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/restore-archived-objects.html)
the object(s) in question before using rclone.
-### Specific options ###
+
+### Standard Options
-Here are the command line options specific to this cloud storage
-system.
+Here are the standard options specific to s3 (Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)).
-#### --s3-acl=STRING ####
+#### --s3-provider
+
+Choose your S3 provider.
+
+- Config: provider
+- Env Var: RCLONE_S3_PROVIDER
+- Type: string
+- Default: ""
+- Examples:
+ - "AWS"
+ - Amazon Web Services (AWS) S3
+ - "Ceph"
+ - Ceph Object Storage
+ - "DigitalOcean"
+ - Digital Ocean Spaces
+ - "Dreamhost"
+ - Dreamhost DreamObjects
+ - "IBMCOS"
+ - IBM COS S3
+ - "Minio"
+ - Minio Object Storage
+ - "Wasabi"
+ - Wasabi Object Storage
+ - "Other"
+ - Any other S3 compatible provider
+
+#### --s3-env-auth
+
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+Only applies if access_key_id and secret_access_key are blank.
+
+- Config: env_auth
+- Env Var: RCLONE_S3_ENV_AUTH
+- Type: bool
+- Default: false
+- Examples:
+ - "false"
+ - Enter AWS credentials in the next step
+ - "true"
+ - Get AWS credentials from the environment (env vars or IAM)
+
+#### --s3-access-key-id
+
+AWS Access Key ID.
+Leave blank for anonymous access or runtime credentials.
+
+- Config: access_key_id
+- Env Var: RCLONE_S3_ACCESS_KEY_ID
+- Type: string
+- Default: ""
+
+#### --s3-secret-access-key
+
+AWS Secret Access Key (password)
+Leave blank for anonymous access or runtime credentials.
+
+- Config: secret_access_key
+- Env Var: RCLONE_S3_SECRET_ACCESS_KEY
+- Type: string
+- Default: ""
+
+#### --s3-region
+
+Region to connect to.
+
+- Config: region
+- Env Var: RCLONE_S3_REGION
+- Type: string
+- Default: ""
+- Examples:
+ - "us-east-1"
+ - The default endpoint - a good choice if you are unsure.
+ - US Region, Northern Virginia or Pacific Northwest.
+ - Leave location constraint empty.
+ - "us-east-2"
+ - US East (Ohio) Region
+ - Needs location constraint us-east-2.
+ - "us-west-2"
+ - US West (Oregon) Region
+ - Needs location constraint us-west-2.
+ - "us-west-1"
+ - US West (Northern California) Region
+ - Needs location constraint us-west-1.
+ - "ca-central-1"
+ - Canada (Central) Region
+ - Needs location constraint ca-central-1.
+ - "eu-west-1"
+ - EU (Ireland) Region
+ - Needs location constraint EU or eu-west-1.
+ - "eu-west-2"
+ - EU (London) Region
+ - Needs location constraint eu-west-2.
+ - "eu-central-1"
+ - EU (Frankfurt) Region
+ - Needs location constraint eu-central-1.
+ - "ap-southeast-1"
+ - Asia Pacific (Singapore) Region
+ - Needs location constraint ap-southeast-1.
+ - "ap-southeast-2"
+ - Asia Pacific (Sydney) Region
+ - Needs location constraint ap-southeast-2.
+ - "ap-northeast-1"
+ - Asia Pacific (Tokyo) Region
+ - Needs location constraint ap-northeast-1.
+ - "ap-northeast-2"
+ - Asia Pacific (Seoul)
+ - Needs location constraint ap-northeast-2.
+ - "ap-south-1"
+ - Asia Pacific (Mumbai)
+ - Needs location constraint ap-south-1.
+ - "sa-east-1"
+ - South America (Sao Paulo) Region
+ - Needs location constraint sa-east-1.
+
+#### --s3-region
+
+Region to connect to.
+Leave blank if you are using an S3 clone and you don't have a region.
+
+- Config: region
+- Env Var: RCLONE_S3_REGION
+- Type: string
+- Default: ""
+- Examples:
+ - ""
+ - Use this if unsure. Will use v4 signatures and an empty region.
+ - "other-v2-signature"
+ - Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.
+
+#### --s3-endpoint
+
+Endpoint for S3 API.
+Leave blank if using AWS to use the default endpoint for the region.
+
+- Config: endpoint
+- Env Var: RCLONE_S3_ENDPOINT
+- Type: string
+- Default: ""
+
+#### --s3-endpoint
+
+Endpoint for IBM COS S3 API.
+Specify if using an IBM COS On Premise.
+
+- Config: endpoint
+- Env Var: RCLONE_S3_ENDPOINT
+- Type: string
+- Default: ""
+- Examples:
+ - "s3-api.us-geo.objectstorage.softlayer.net"
+ - US Cross Region Endpoint
+ - "s3-api.dal.us-geo.objectstorage.softlayer.net"
+ - US Cross Region Dallas Endpoint
+ - "s3-api.wdc-us-geo.objectstorage.softlayer.net"
+ - US Cross Region Washington DC Endpoint
+ - "s3-api.sjc-us-geo.objectstorage.softlayer.net"
+ - US Cross Region San Jose Endpoint
+ - "s3-api.us-geo.objectstorage.service.networklayer.com"
+ - US Cross Region Private Endpoint
+ - "s3-api.dal-us-geo.objectstorage.service.networklayer.com"
+ - US Cross Region Dallas Private Endpoint
+ - "s3-api.wdc-us-geo.objectstorage.service.networklayer.com"
+ - US Cross Region Washington DC Private Endpoint
+ - "s3-api.sjc-us-geo.objectstorage.service.networklayer.com"
+ - US Cross Region San Jose Private Endpoint
+ - "s3.us-east.objectstorage.softlayer.net"
+ - US Region East Endpoint
+ - "s3.us-east.objectstorage.service.networklayer.com"
+ - US Region East Private Endpoint
+ - "s3.us-south.objectstorage.softlayer.net"
+ - US Region South Endpoint
+ - "s3.us-south.objectstorage.service.networklayer.com"
+ - US Region South Private Endpoint
+ - "s3.eu-geo.objectstorage.softlayer.net"
+ - EU Cross Region Endpoint
+ - "s3.fra-eu-geo.objectstorage.softlayer.net"
+ - EU Cross Region Frankfurt Endpoint
+ - "s3.mil-eu-geo.objectstorage.softlayer.net"
+ - EU Cross Region Milan Endpoint
+ - "s3.ams-eu-geo.objectstorage.softlayer.net"
+ - EU Cross Region Amsterdam Endpoint
+ - "s3.eu-geo.objectstorage.service.networklayer.com"
+ - EU Cross Region Private Endpoint
+ - "s3.fra-eu-geo.objectstorage.service.networklayer.com"
+ - EU Cross Region Frankfurt Private Endpoint
+ - "s3.mil-eu-geo.objectstorage.service.networklayer.com"
+ - EU Cross Region Milan Private Endpoint
+ - "s3.ams-eu-geo.objectstorage.service.networklayer.com"
+ - EU Cross Region Amsterdam Private Endpoint
+    - "s3.eu-gb.objectstorage.softlayer.net"
+        - Great Britain Endpoint
+    - "s3.eu-gb.objectstorage.service.networklayer.com"
+        - Great Britain Private Endpoint
+ - "s3.ap-geo.objectstorage.softlayer.net"
+ - APAC Cross Regional Endpoint
+ - "s3.tok-ap-geo.objectstorage.softlayer.net"
+ - APAC Cross Regional Tokyo Endpoint
+ - "s3.hkg-ap-geo.objectstorage.softlayer.net"
+ - APAC Cross Regional HongKong Endpoint
+ - "s3.seo-ap-geo.objectstorage.softlayer.net"
+ - APAC Cross Regional Seoul Endpoint
+ - "s3.ap-geo.objectstorage.service.networklayer.com"
+ - APAC Cross Regional Private Endpoint
+ - "s3.tok-ap-geo.objectstorage.service.networklayer.com"
+ - APAC Cross Regional Tokyo Private Endpoint
+ - "s3.hkg-ap-geo.objectstorage.service.networklayer.com"
+ - APAC Cross Regional HongKong Private Endpoint
+ - "s3.seo-ap-geo.objectstorage.service.networklayer.com"
+ - APAC Cross Regional Seoul Private Endpoint
+ - "s3.mel01.objectstorage.softlayer.net"
+ - Melbourne Single Site Endpoint
+ - "s3.mel01.objectstorage.service.networklayer.com"
+ - Melbourne Single Site Private Endpoint
+ - "s3.tor01.objectstorage.softlayer.net"
+ - Toronto Single Site Endpoint
+ - "s3.tor01.objectstorage.service.networklayer.com"
+ - Toronto Single Site Private Endpoint
+
+#### --s3-endpoint
+
+Endpoint for S3 API.
+Required when using an S3 clone.
+
+- Config: endpoint
+- Env Var: RCLONE_S3_ENDPOINT
+- Type: string
+- Default: ""
+- Examples:
+ - "objects-us-west-1.dream.io"
+ - Dream Objects endpoint
+ - "nyc3.digitaloceanspaces.com"
+ - Digital Ocean Spaces New York 3
+ - "ams3.digitaloceanspaces.com"
+ - Digital Ocean Spaces Amsterdam 3
+ - "sgp1.digitaloceanspaces.com"
+ - Digital Ocean Spaces Singapore 1
+ - "s3.wasabisys.com"
+ - Wasabi Object Storage
+
+#### --s3-location-constraint
+
+Location constraint - must be set to match the Region.
+Used when creating buckets only.
+
+- Config: location_constraint
+- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
+- Type: string
+- Default: ""
+- Examples:
+ - ""
+ - Empty for US Region, Northern Virginia or Pacific Northwest.
+ - "us-east-2"
+ - US East (Ohio) Region.
+ - "us-west-2"
+ - US West (Oregon) Region.
+ - "us-west-1"
+ - US West (Northern California) Region.
+ - "ca-central-1"
+ - Canada (Central) Region.
+ - "eu-west-1"
+ - EU (Ireland) Region.
+ - "eu-west-2"
+ - EU (London) Region.
+ - "EU"
+ - EU Region.
+ - "ap-southeast-1"
+ - Asia Pacific (Singapore) Region.
+ - "ap-southeast-2"
+ - Asia Pacific (Sydney) Region.
+ - "ap-northeast-1"
+ - Asia Pacific (Tokyo) Region.
+ - "ap-northeast-2"
+ - Asia Pacific (Seoul)
+ - "ap-south-1"
+ - Asia Pacific (Mumbai)
+ - "sa-east-1"
+ - South America (Sao Paulo) Region.
+
+#### --s3-location-constraint
+
+Location constraint - must match endpoint when using IBM Cloud Public.
+For on-prem COS, do not make a selection from this list; just hit enter
+
+- Config: location_constraint
+- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
+- Type: string
+- Default: ""
+- Examples:
+ - "us-standard"
+ - US Cross Region Standard
+ - "us-vault"
+ - US Cross Region Vault
+ - "us-cold"
+ - US Cross Region Cold
+ - "us-flex"
+ - US Cross Region Flex
+ - "us-east-standard"
+ - US East Region Standard
+ - "us-east-vault"
+ - US East Region Vault
+ - "us-east-cold"
+ - US East Region Cold
+ - "us-east-flex"
+ - US East Region Flex
+    - "us-south-standard"
+        - US South Region Standard
+ - "us-south-vault"
+ - US South Region Vault
+ - "us-south-cold"
+ - US South Region Cold
+ - "us-south-flex"
+ - US South Region Flex
+ - "eu-standard"
+ - EU Cross Region Standard
+ - "eu-vault"
+ - EU Cross Region Vault
+ - "eu-cold"
+ - EU Cross Region Cold
+ - "eu-flex"
+ - EU Cross Region Flex
+    - "eu-gb-standard"
+        - Great Britain Standard
+    - "eu-gb-vault"
+        - Great Britain Vault
+    - "eu-gb-cold"
+        - Great Britain Cold
+    - "eu-gb-flex"
+        - Great Britain Flex
+ - "ap-standard"
+ - APAC Standard
+ - "ap-vault"
+ - APAC Vault
+ - "ap-cold"
+ - APAC Cold
+ - "ap-flex"
+ - APAC Flex
+ - "mel01-standard"
+ - Melbourne Standard
+ - "mel01-vault"
+ - Melbourne Vault
+ - "mel01-cold"
+ - Melbourne Cold
+ - "mel01-flex"
+ - Melbourne Flex
+ - "tor01-standard"
+ - Toronto Standard
+ - "tor01-vault"
+ - Toronto Vault
+ - "tor01-cold"
+ - Toronto Cold
+ - "tor01-flex"
+ - Toronto Flex
+
+#### --s3-location-constraint
+
+Location constraint - must be set to match the Region.
+Leave blank if not sure. Used when creating buckets only.
+
+- Config: location_constraint
+- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
+- Type: string
+- Default: ""
+
+#### --s3-acl
Canned ACL used when creating buckets and/or storing objects in S3.
+For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
-For more info visit the [canned ACL docs](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl).
+- Config: acl
+- Env Var: RCLONE_S3_ACL
+- Type: string
+- Default: ""
+- Examples:
+ - "private"
+ - Owner gets FULL_CONTROL. No one else has access rights (default).
+ - "public-read"
+ - Owner gets FULL_CONTROL. The AllUsers group gets READ access.
+ - "public-read-write"
+ - Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
+ - Granting this on a bucket is generally not recommended.
+ - "authenticated-read"
+ - Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
+ - "bucket-owner-read"
+ - Object owner gets FULL_CONTROL. Bucket owner gets READ access.
+ - If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
+ - "bucket-owner-full-control"
+ - Both the object owner and the bucket owner get FULL_CONTROL over the object.
+ - If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
+ - "private"
+ - Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS
+ - "public-read"
+ - Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS
+ - "public-read-write"
+ - Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS
+ - "authenticated-read"
+ - Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS
-#### --s3-storage-class=STRING ####
+#### --s3-server-side-encryption
-Storage class to upload new objects with.
+The server-side encryption algorithm used when storing this object in S3.
-Available options include:
+- Config: server_side_encryption
+- Env Var: RCLONE_S3_SERVER_SIDE_ENCRYPTION
+- Type: string
+- Default: ""
+- Examples:
+ - ""
+ - None
+ - "AES256"
+ - AES256
+ - "aws:kms"
+ - aws:kms
- - STANDARD - default storage class
- - STANDARD_IA - for less frequently accessed data (e.g backups)
- - ONEZONE_IA - for storing data in only one Availability Zone
- - REDUCED_REDUNDANCY (only for noncritical, reproducible data, has lower redundancy)
+#### --s3-sse-kms-key-id
-#### --s3-chunk-size=SIZE ####
+If using KMS ID you must provide the ARN of Key.
+
+- Config: sse_kms_key_id
+- Env Var: RCLONE_S3_SSE_KMS_KEY_ID
+- Type: string
+- Default: ""
+- Examples:
+ - ""
+ - None
+ - "arn:aws:kms:us-east-1:*"
+ - arn:aws:kms:*
+
+#### --s3-storage-class
+
+The storage class to use when storing new objects in S3.
+
+- Config: storage_class
+- Env Var: RCLONE_S3_STORAGE_CLASS
+- Type: string
+- Default: ""
+- Examples:
+ - ""
+ - Default
+ - "STANDARD"
+ - Standard storage class
+ - "REDUCED_REDUNDANCY"
+ - Reduced redundancy storage class
+ - "STANDARD_IA"
+ - Standard Infrequent Access storage class
+ - "ONEZONE_IA"
+ - One Zone Infrequent Access storage class
+
+### Advanced Options
+
+Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)).
+
+#### --s3-chunk-size
+
+Chunk size to use for uploading.
Any files larger than this will be uploaded in chunks of this
size. The default is 5MB. The minimum is 5MB.
-Note that 2 chunks of this size are buffered in memory per transfer.
+Note that "--s3-upload-concurrency" chunks of this size are buffered
+in memory per transfer.
If you are transferring large files over high speed links and you have
enough memory, then increasing this will speed up the transfers.
-#### --s3-force-path-style=BOOL ####
+- Config: chunk_size
+- Env Var: RCLONE_S3_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 5M
+
+#### --s3-disable-checksum
+
+Don't store MD5 checksum with object metadata
+
+- Config: disable_checksum
+- Env Var: RCLONE_S3_DISABLE_CHECKSUM
+- Type: bool
+- Default: false
+
+#### --s3-session-token
+
+An AWS session token
+
+- Config: session_token
+- Env Var: RCLONE_S3_SESSION_TOKEN
+- Type: string
+- Default: ""
+
+#### --s3-upload-concurrency
+
+Concurrency for multipart uploads.
+
+This is the number of chunks of the same file that are uploaded
+concurrently.
+
+If you are uploading small numbers of large files over a high speed link
+and these uploads do not fully utilize your bandwidth, then increasing
+this may help to speed up the transfers.
+
+- Config: upload_concurrency
+- Env Var: RCLONE_S3_UPLOAD_CONCURRENCY
+- Type: int
+- Default: 2
+
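Putting the chunk-size and concurrency notes together gives a rough upper bound on upload buffer memory. This is a sketch assuming the model stated above (`--s3-upload-concurrency` chunks of `--s3-chunk-size` buffered per transfer, times `--transfers` simultaneous transfers); the defaults used are the documented ones:

```python
def s3_upload_buffer_bytes(chunk_size=5 * 1024 ** 2, upload_concurrency=2, transfers=4):
    """Approximate peak upload buffer memory for multipart S3 uploads.

    chunk_size defaults to 5M, upload_concurrency to 2 and transfers
    to rclone's default of 4 simultaneous transfers.
    """
    return chunk_size * upload_concurrency * transfers
```

With the defaults this comes to 40 MiB of buffers; raising `--s3-chunk-size` or `--s3-upload-concurrency` scales the figure linearly.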
+#### --s3-force-path-style
+
+If true use path style access; if false use virtual hosted style.
If this is true (the default) then rclone will use path style access,
if false then rclone will use virtual path style. See [the AWS S3
docs](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro)
for more info.
-Some providers (eg Aliyun OSS or Netease COS) require this set to
-`false`. It can also be set in the config in the advanced section.
+Some providers (eg Aliyun OSS or Netease COS) require this set to false.
-#### --s3-upload-concurrency ####
+- Config: force_path_style
+- Env Var: RCLONE_S3_FORCE_PATH_STYLE
+- Type: bool
+- Default: true
-Number of chunks of the same file that are uploaded concurrently.
-Default is 2.
+#### --s3-v2-auth
-If you are uploading small amount of large file over high speed link
-and these uploads do not fully utilize your bandwidth, then increasing
-this may help to speed up the transfers.
+If true use v2 authentication.
+
+If this is false (the default) then rclone will use v4 authentication.
+If it is set then rclone will use v2 authentication.
+
+Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.
+
+- Config: v2_auth
+- Env Var: RCLONE_S3_V2_AUTH
+- Type: bool
+- Default: false
+
+
### Anonymous access to public buckets ###
@@ -6974,6 +7828,9 @@ versions of files, leaving the current ones intact. You can also
supply a path and only old versions under that path will be deleted,
eg `rclone cleanup remote:bucket/path/to/stuff`.
+Note that `cleanup` does not remove partially uploaded files
+from the bucket.
+
When you `purge` a bucket, the current and the old versions will be
deleted then the bucket will be deleted.
@@ -7055,46 +7912,10 @@ start and finish the upload) and another 2 requests for each chunk:
/b2api/v1/b2_finish_large_file
```
-### Specific options ###
+#### Versions ####
-Here are the command line options specific to this cloud storage
-system.
-
-#### --b2-chunk-size valuee=SIZE ####
-
-When uploading large files chunk the file into this size. Note that
-these chunks are buffered in memory and there might a maximum of
-`--transfers` chunks in progress at once. 5,000,000 Bytes is the
-minimim size (default 96M).
-
-#### --b2-upload-cutoff=SIZE ####
-
-Cutoff for switching to chunked upload (default 190.735 MiB == 200
-MB). Files above this size will be uploaded in chunks of
-`--b2-chunk-size`.
-
-This value should be set no larger than 4.657GiB (== 5GB) as this is
-the largest file size that can be uploaded.
-
-
-#### --b2-test-mode=FLAG ####
-
-This is for debugging purposes only.
-
-Setting FLAG to one of the strings below will cause b2 to return
-specific errors for debugging purposes.
-
- * `fail_some_uploads`
- * `expire_some_account_authorization_tokens`
- * `force_cap_exceeded`
-
-These will be set in the `X-Bz-Test-Mode` header which is documented
-in the [b2 integrations
-checklist](https://www.backblaze.com/b2/docs/integration_checklist.html).
-
-#### --b2-versions ####
-
-When set rclone will show and act on older versions of files. For example
+Versions can be viewed with the `--b2-versions` flag. When it is set
+rclone will show and act on older versions of files. For example
Listing without `--b2-versions`
@@ -7120,6 +7941,111 @@ server to the nearest millisecond appended to them.
Note that when using `--b2-versions` no file write operations are
permitted, so you can't upload files or delete them.
+
+### Standard Options
+
+Here are the standard options specific to b2 (Backblaze B2).
+
+#### --b2-account
+
+Account ID or Application Key ID
+
+- Config: account
+- Env Var: RCLONE_B2_ACCOUNT
+- Type: string
+- Default: ""
+
+#### --b2-key
+
+Application Key
+
+- Config: key
+- Env Var: RCLONE_B2_KEY
+- Type: string
+- Default: ""
+
+#### --b2-hard-delete
+
+Permanently delete files on remote removal, otherwise hide files.
+
+- Config: hard_delete
+- Env Var: RCLONE_B2_HARD_DELETE
+- Type: bool
+- Default: false
+
+### Advanced Options
+
+Here are the advanced options specific to b2 (Backblaze B2).
+
+#### --b2-endpoint
+
+Endpoint for the service.
+Leave blank normally.
+
+- Config: endpoint
+- Env Var: RCLONE_B2_ENDPOINT
+- Type: string
+- Default: ""
+
+#### --b2-test-mode
+
+A flag string for X-Bz-Test-Mode header for debugging.
+
+This is for debugging purposes only. Setting it to one of the strings
+below will cause b2 to return specific errors:
+
+ * "fail_some_uploads"
+ * "expire_some_account_authorization_tokens"
+ * "force_cap_exceeded"
+
+These will be set in the "X-Bz-Test-Mode" header which is documented
+in the [b2 integrations checklist](https://www.backblaze.com/b2/docs/integration_checklist.html).
+
+- Config: test_mode
+- Env Var: RCLONE_B2_TEST_MODE
+- Type: string
+- Default: ""
+
+#### --b2-versions
+
+Include old versions in directory listings.
+Note that when using this no file write operations are permitted,
+so you can't upload files or delete them.
+
+- Config: versions
+- Env Var: RCLONE_B2_VERSIONS
+- Type: bool
+- Default: false
+
+#### --b2-upload-cutoff
+
+Cutoff for switching to chunked upload.
+
+Files above this size will be uploaded in chunks of "--b2-chunk-size".
+
+This value should be set no larger than 4.657GiB (== 5GB).
+
+- Config: upload_cutoff
+- Env Var: RCLONE_B2_UPLOAD_CUTOFF
+- Type: SizeSuffix
+- Default: 200M
+
+#### --b2-chunk-size
+
+Upload chunk size. Must fit in memory.
+
+When uploading large files, chunk the file into this size. Note that
+these chunks are buffered in memory and there might be a maximum of
+"--transfers" chunks in progress at once. 5,000,000 Bytes is the
+minimum size.
+
+- Config: chunk_size
+- Env Var: RCLONE_B2_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 96M
+
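The worst-case buffering arithmetic implied above (up to `--transfers` chunks of `--b2-chunk-size` in memory at once) can be sketched as follows; `b2_upload_memory_bytes` is a hypothetical helper for illustration, not part of rclone:

```python
def b2_upload_memory_bytes(transfers: int, chunk_size_mb: int) -> int:
    """Rough worst-case memory for chunked B2 uploads: each of the
    --transfers transfers may buffer one chunk in memory at a time."""
    return transfers * chunk_size_mb * 1024 * 1024

# With the defaults (--transfers 4, --b2-chunk-size 96M) this is 384 MB.
print(b2_upload_memory_bytes(4, 96) // (1024 * 1024))
```

This is only an upper bound; if fewer transfers are active, less memory is used.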
+
+
Box
-----------------------------------------
@@ -7333,19 +8259,54 @@ normally 8MB so increasing `--transfers` will increase memory use.
Depending on the enterprise settings for your user, the item will
either be actually deleted from Box or moved to the trash.
-### Specific options ###
+
+### Standard Options
-Here are the command line options specific to this cloud storage
-system.
+Here are the standard options specific to box (Box).
-#### --box-upload-cutoff=SIZE ####
+#### --box-client-id
-Cutoff for switching to chunked upload - must be >= 50MB. The default
-is 50MB.
+Box App Client Id.
+Leave blank normally.
-#### --box-commit-retries int ####
+- Config: client_id
+- Env Var: RCLONE_BOX_CLIENT_ID
+- Type: string
+- Default: ""
-Max number of times to try committing a multipart file. (default 100)
+#### --box-client-secret
+
+Box App Client Secret
+Leave blank normally.
+
+- Config: client_secret
+- Env Var: RCLONE_BOX_CLIENT_SECRET
+- Type: string
+- Default: ""
+
+### Advanced Options
+
+Here are the advanced options specific to box (Box).
+
+#### --box-upload-cutoff
+
+Cutoff for switching to multipart upload (>= 50MB).
+
+- Config: upload_cutoff
+- Env Var: RCLONE_BOX_UPLOAD_CUTOFF
+- Type: SizeSuffix
+- Default: 50M
+
+#### --box-commit-retries
+
+Max number of times to try committing a multipart file.
+
+- Config: commit_retries
+- Env Var: RCLONE_BOX_COMMIT_RETRIES
+- Type: int
+- Default: 100
+
+
### Limitations ###
@@ -7486,7 +8447,8 @@ to the cloud provider without interrupting the reading (small blip can happen th
Files are uploaded in sequence and only one file is uploaded at a time.
Uploads will be stored in a queue and be processed based on the order they were added.
-The queue and the temporary storage is persistent across restarts and even purges of the cache.
+The queue and the temporary storage is persistent across restarts but
+can be cleared on startup with the `--cache-db-purge` flag.
### Write Support ###
@@ -7534,6 +8496,28 @@ and password) in your remote and it will be automatically enabled.
Affected settings:
- `cache-workers`: _Configured value_ during confirmed playback or _1_ all the other times
+##### Certificate Validation #####
+
+When the Plex server is configured to only accept secure connections, it is
+possible to use `.plex.direct` URLs to ensure certificate validation succeeds.
+These URLs are used by Plex internally to connect to the Plex server securely.
+
+The format of these URLs is the following:
+
+https://ip-with-dots-replaced.server-hash.plex.direct:32400/
+
+The `ip-with-dots-replaced` part can be any IPv4 address, where the dots
+have been replaced with dashes, e.g. `127.0.0.1` becomes `127-0-0-1`.
+
+To get the `server-hash` part, the easiest way is to visit
+
+https://plex.tv/api/resources?includeHttps=1&X-Plex-Token=your-plex-token
+
+This page will list all the available Plex servers for your account
+with at least one `.plex.direct` link for each. Copy one URL and replace
+the IP address with the desired address. This can be used as the
+`plex_url` value.
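The dots-to-dashes transformation described above can be sketched in a few lines; the server hash below is a made-up placeholder:

```python
def plex_direct_url(ip: str, server_hash: str, port: int = 32400) -> str:
    """Build a .plex.direct URL: replace the dots in the IPv4 address
    with dashes and combine it with the server hash."""
    return "https://{}.{}.plex.direct:{}/".format(
        ip.replace(".", "-"), server_hash, port)

print(plex_direct_url("127.0.0.1", "abc123"))
# https://127-0-0-1.abc123.plex.direct:32400/
```

The resulting string can be used directly as the `plex_url` value.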
+
### Known issues ###
#### Mount and --dir-cache-time ####
@@ -7595,6 +8579,19 @@ which makes it think we're downloading the full file instead of small chunks.
Organizing the remotes in this order yields better results:
**cloud remote** -> **cache** -> **crypt**
+#### absolute remote paths ####
+
+`cache` cannot differentiate between relative and absolute paths for the wrapped remote.
+Any path given in the `remote` config setting and on the command line will be passed to
+the wrapped remote as is, but for storing the chunks on disk the path will be made
+relative by removing any leading `/` character.
+
+This behavior is irrelevant for most backend types, but there are backends where a leading `/`
+changes the effective directory, e.g. in the `sftp` backend paths starting with a `/` are
+relative to the root of the SSH server and paths without are relative to the user home directory.
+As a result `sftp:bin` and `sftp:/bin` will share the same cache folder, even if they represent
+a different directory on the SSH server.
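The collision described above can be illustrated with a small sketch; `cache_folder_key` is a hypothetical helper modelling the leading-`/` stripping, not rclone's actual implementation:

```python
def cache_folder_key(remote_path: str) -> str:
    """Model how cache derives an on-disk chunk folder from the wrapped
    remote path: any leading '/' is stripped, so absolute and relative
    sftp paths map to the same folder."""
    name, _, path = remote_path.partition(":")
    return name + "/" + path.lstrip("/")

print(cache_folder_key("sftp:bin"))   # sftp/bin
print(cache_folder_key("sftp:/bin"))  # sftp/bin -- same cache folder
```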
+
### Cache and Remote Control (--rc) ###
Cache supports the new `--rc` mode in rclone and can be remote controlled through the following end points:
By default, the listener is disabled if you do not add the flag.
@@ -7607,107 +8604,221 @@ Params:
- **remote** = path to remote **(required)**
- **withData** = true/false to delete cached data (chunks) as well _(optional, false by default)_
-### Specific options ###
+
+### Standard Options
-Here are the command line options specific to this cloud storage
-system.
+Here are the standard options specific to cache (Cache a remote).
-#### --cache-db-path=PATH ####
+#### --cache-remote
-Path to where the file structure metadata (DB) is stored locally. The remote
-name is used as the DB file name.
+Remote to cache.
+Normally should contain a ':' and a path, eg "myremote:path/to/dir",
+"myremote:bucket" or maybe "myremote:" (not recommended).
-**Default**: /cache-backend/
-**Example**: /.cache/cache-backend/test-cache
+- Config: remote
+- Env Var: RCLONE_CACHE_REMOTE
+- Type: string
+- Default: ""
-#### --cache-chunk-path=PATH ####
+#### --cache-plex-url
-Path to where partial file data (chunks) is stored locally. The remote
-name is appended to the final path.
+The URL of the Plex server
-This config follows the `--cache-db-path`. If you specify a custom
-location for `--cache-db-path` and don't specify one for `--cache-chunk-path`
-then `--cache-chunk-path` will use the same path as `--cache-db-path`.
+- Config: plex_url
+- Env Var: RCLONE_CACHE_PLEX_URL
+- Type: string
+- Default: ""
-**Default**: /cache-backend/
-**Example**: /.cache/cache-backend/test-cache
+#### --cache-plex-username
-#### --cache-db-purge ####
+The username of the Plex user
-Flag to clear all the cached data for this remote before.
+- Config: plex_username
+- Env Var: RCLONE_CACHE_PLEX_USERNAME
+- Type: string
+- Default: ""
-**Default**: not set
+#### --cache-plex-password
-#### --cache-chunk-size=SIZE ####
+The password of the Plex user
-The size of a chunk (partial file data). Use lower numbers for slower
-connections. If the chunk size is changed, any downloaded chunks will be invalid and cache-chunk-path will need to be cleared or unexpected EOF errors will occur.
+- Config: plex_password
+- Env Var: RCLONE_CACHE_PLEX_PASSWORD
+- Type: string
+- Default: ""
-**Default**: 5M
+#### --cache-chunk-size
-#### --cache-total-chunk-size=SIZE ####
+The size of a chunk (partial file data).
-The total size that the chunks can take up on the local disk. If `cache`
-exceeds this value then it will start to the delete the oldest chunks until
-it goes under this value.
+Use lower numbers for slower connections. If the chunk size is
+changed, any downloaded chunks will be invalid and cache-chunk-path
+will need to be cleared or unexpected EOF errors will occur.
-**Default**: 10G
+- Config: chunk_size
+- Env Var: RCLONE_CACHE_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 5M
+- Examples:
+ - "1m"
+ - 1MB
+ - "5M"
+ - 5 MB
+ - "10M"
+ - 10 MB
-#### --cache-chunk-clean-interval=DURATION ####
+#### --cache-info-age
-How often should `cache` perform cleanups of the chunk storage. The default value
-should be ok for most people. If you find that `cache` goes over `cache-total-chunk-size`
-too often then try to lower this value to force it to perform cleanups more often.
-
-**Default**: 1m
-
-#### --cache-info-age=DURATION ####
-
-How long to keep file structure information (directory listings, file size,
-mod times etc) locally.
-
-If all write operations are done through `cache` then you can safely make
+How long to cache file structure information (directory listings, file size, times etc).
+If all write operations are done through the cache then you can safely make
this value very large as the cache store will also be updated in real time.
-**Default**: 6h
+- Config: info_age
+- Env Var: RCLONE_CACHE_INFO_AGE
+- Type: Duration
+- Default: 6h0m0s
+- Examples:
+ - "1h"
+ - 1 hour
+ - "24h"
+ - 24 hours
+ - "48h"
+ - 48 hours
-#### --cache-read-retries=RETRIES ####
+#### --cache-chunk-total-size
+
+The total size that the chunks can take up on the local disk.
+
+If the cache exceeds this value then it will start to delete the
+oldest chunks until it goes under this value.
+
+- Config: chunk_total_size
+- Env Var: RCLONE_CACHE_CHUNK_TOTAL_SIZE
+- Type: SizeSuffix
+- Default: 10G
+- Examples:
+ - "500M"
+ - 500 MB
+ - "1G"
+ - 1 GB
+ - "10G"
+ - 10 GB
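The eviction rule above (delete the oldest chunks until the total fits under the limit) can be sketched as follows; the chunk records and their fields are hypothetical, not rclone's actual data structures:

```python
def evict_oldest(chunks, total_limit):
    """Drop the oldest chunks until the total size fits under the
    limit, mirroring how cache enforces --cache-chunk-total-size."""
    kept = sorted(chunks, key=lambda c: c["atime"])  # oldest first
    total = sum(c["size"] for c in kept)
    while kept and total > total_limit:
        total -= kept.pop(0)["size"]  # evict the oldest chunk
    return kept

chunks = [{"atime": 1, "size": 5}, {"atime": 2, "size": 5}, {"atime": 3, "size": 5}]
print([c["atime"] for c in evict_oldest(chunks, 10)])  # [2, 3]
```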
+
+### Advanced Options
+
+Here are the advanced options specific to cache (Cache a remote).
+
+#### --cache-plex-token
+
+The plex token for authentication - auto set normally
+
+- Config: plex_token
+- Env Var: RCLONE_CACHE_PLEX_TOKEN
+- Type: string
+- Default: ""
+
+#### --cache-plex-insecure
+
+Skip all certificate verifications when connecting to the Plex server
+
+- Config: plex_insecure
+- Env Var: RCLONE_CACHE_PLEX_INSECURE
+- Type: string
+- Default: ""
+
+#### --cache-db-path
+
+Directory to store file structure metadata DB.
+The remote name is used as the DB file name.
+
+- Config: db_path
+- Env Var: RCLONE_CACHE_DB_PATH
+- Type: string
+- Default: "/home/ncw/.cache/rclone/cache-backend"
+
+#### --cache-chunk-path
+
+Directory to cache chunk files.
+
+Path to where partial file data (chunks) are stored locally. The remote
+name is appended to the final path.
+
+This config follows the "--cache-db-path". If you specify a custom
+location for "--cache-db-path" and don't specify one for "--cache-chunk-path"
+then "--cache-chunk-path" will use the same path as "--cache-db-path".
+
+- Config: chunk_path
+- Env Var: RCLONE_CACHE_CHUNK_PATH
+- Type: string
+- Default: "/home/ncw/.cache/rclone/cache-backend"
+
+#### --cache-db-purge
+
+Clear all the cached data for this remote on start.
+
+- Config: db_purge
+- Env Var: RCLONE_CACHE_DB_PURGE
+- Type: bool
+- Default: false
+
+#### --cache-chunk-clean-interval
+
+How often should the cache perform cleanups of the chunk storage.
+The default value should be ok for most people. If you find that the
+cache goes over "cache-chunk-total-size" too often then try to lower
+this value to force it to perform cleanups more often.
+
+- Config: chunk_clean_interval
+- Env Var: RCLONE_CACHE_CHUNK_CLEAN_INTERVAL
+- Type: Duration
+- Default: 1m0s
+
+#### --cache-read-retries
How many times to retry a read from a cache storage.
-Since reading from a `cache` stream is independent from downloading file data,
-readers can get to a point where there's no more data in the cache.
-Most of the times this can indicate a connectivity issue if `cache` isn't
-able to provide file data anymore.
+Since reading from a cache stream is independent from downloading file
+data, readers can get to a point where there's no more data in the
+cache. Most of the time this can indicate a connectivity issue if
+cache isn't able to provide file data anymore.
For really slow connections, increase this to a point where the stream is
-able to provide data but your experience will be very stuttering.
+able to provide data but you will experience a lot of stuttering.
-**Default**: 10
+- Config: read_retries
+- Env Var: RCLONE_CACHE_READ_RETRIES
+- Type: int
+- Default: 10
-#### --cache-workers=WORKERS ####
+#### --cache-workers
How many workers should run in parallel to download chunks.
-Higher values will mean more parallel processing (better CPU needed) and
-more concurrent requests on the cloud provider.
-This impacts several aspects like the cloud provider API limits, more stress
-on the hardware that rclone runs on but it also means that streams will
-be more fluid and data will be available much more faster to readers.
+Higher values will mean more parallel processing (better CPU needed)
+and more concurrent requests on the cloud provider. This impacts
+several aspects like the cloud provider API limits, more stress on the
+hardware that rclone runs on but it also means that streams will be
+more fluid and data will be available much faster to readers.
-**Note**: If the optional Plex integration is enabled then this setting
-will adapt to the type of reading performed and the value specified here will be used
-as a maximum number of workers to use.
-**Default**: 4
+**Note**: If the optional Plex integration is enabled then this
+setting will adapt to the type of reading performed and the value
+specified here will be used as a maximum number of workers to use.
-#### --cache-chunk-no-memory ####
+- Config: workers
+- Env Var: RCLONE_CACHE_WORKERS
+- Type: int
+- Default: 4
-By default, `cache` will keep file data during streaming in RAM as well
+#### --cache-chunk-no-memory
+
+Disable the in-memory cache for storing chunks during streaming.
+
+By default, cache will keep file data during streaming in RAM as well
to provide it to readers as fast as possible.
This transient data is evicted as soon as it is read and the number of
chunks stored doesn't exceed the number of workers. However, depending
-on other settings like `cache-chunk-size` and `cache-workers` this footprint
+on other settings like "cache-chunk-size" and "cache-workers" this footprint
can increase if there are parallel streams too (multiple files being read
at the same time).
@@ -7715,55 +8826,83 @@ If the hardware permits it, use this feature to provide an overall better
performance during streaming but it can also be disabled if RAM is not
available on the local machine.
-**Default**: not set
+- Config: chunk_no_memory
+- Env Var: RCLONE_CACHE_CHUNK_NO_MEMORY
+- Type: bool
+- Default: false
-#### --cache-rps=NUMBER ####
+#### --cache-rps
-This setting places a hard limit on the number of requests per second that `cache`
-will be doing to the cloud provider remote and try to respect that value
-by setting waits between reads.
+Limits the number of requests per second to the source FS (-1 to disable)
-If you find that you're getting banned or limited on the cloud provider
-through cache and know that a smaller number of requests per second will
-allow you to work with it then you can use this setting for that.
+This setting places a hard limit on the number of requests per second
+that cache will be doing to the cloud provider remote and try to
+respect that value by setting waits between reads.
-A good balance of all the other settings should make this
-setting useless but it is available to set for more special cases.
+If you find that you're getting banned or limited on the cloud
+provider through cache and know that a smaller number of requests per
+second will allow you to work with it then you can use this setting
+for that.
-**NOTE**: This will limit the number of requests during streams but other
-API calls to the cloud provider like directory listings will still pass.
+A good balance of all the other settings should make this setting
+useless but it is available to set for more special cases.
-**Default**: disabled
+**NOTE**: This will limit the number of requests during streams but
+other API calls to the cloud provider like directory listings will
+still pass.
-#### --cache-writes ####
+- Config: rps
+- Env Var: RCLONE_CACHE_RPS
+- Type: int
+- Default: -1
-If you need to read files immediately after you upload them through `cache`
-you can enable this flag to have their data stored in the cache store at the
-same time during upload.
+#### --cache-writes
-**Default**: not set
+Cache file data on writes through the FS
-#### --cache-tmp-upload-path=PATH ####
+If you need to read files immediately after you upload them through
+cache you can enable this flag to have their data stored in the
+cache store at the same time during upload.
-This is the path where `cache` will use as a temporary storage for new files
-that need to be uploaded to the cloud provider.
+- Config: writes
+- Env Var: RCLONE_CACHE_WRITES
+- Type: bool
+- Default: false
-Specifying a value will enable this feature. Without it, it is completely disabled
-and files will be uploaded directly to the cloud provider
+#### --cache-tmp-upload-path
-**Default**: empty
+Directory to keep temporary files until they are uploaded.
-#### --cache-tmp-wait-time=DURATION ####
+This is the path that cache will use as temporary storage for new
+files that need to be uploaded to the cloud provider.
+
+Specifying a value will enable this feature. Without it, it is
+completely disabled and files will be uploaded directly to the cloud
+provider.
+
+- Config: tmp_upload_path
+- Env Var: RCLONE_CACHE_TMP_UPLOAD_PATH
+- Type: string
+- Default: ""
+
+#### --cache-tmp-wait-time
+
+How long should files be stored in local cache before being uploaded
This is the duration that a file must wait in the temporary location
_cache-tmp-upload-path_ before it is selected for upload.
-Note that only one file is uploaded at a time and it can take longer to
-start the upload if a queue formed for this purpose.
+Note that only one file is uploaded at a time and it can take longer
+to start the upload if a queue formed for this purpose.
-**Default**: 15m
+- Config: tmp_wait_time
+- Env Var: RCLONE_CACHE_TMP_WAIT_TIME
+- Type: Duration
+- Default: 15s
-#### --cache-db-wait-time=DURATION ####
+#### --cache-db-wait-time
+
+How long to wait for the DB to be available - 0 is unlimited
Only one process can have the DB open at any one time, so rclone waits
for this duration for the DB to become available before it gives an
@@ -7771,7 +8910,12 @@ error.
If you set it to 0 then it will wait forever.
-**Default**: 1s
+- Config: db_wait_time
+- Env Var: RCLONE_CACHE_DB_WAIT_TIME
+- Type: Duration
+- Default: 1s
+
+
Crypt
----------------------------------------
@@ -8063,12 +9207,78 @@ Note that you should use the `rclone cryptcheck` command to check the
integrity of a crypted remote instead of `rclone check` which can't
check the checksums properly.
-### Specific options ###
+
+### Standard Options
-Here are the command line options specific to this cloud storage
-system.
+Here are the standard options specific to crypt (Encrypt/Decrypt a remote).
-#### --crypt-show-mapping ####
+#### --crypt-remote
+
+Remote to encrypt/decrypt.
+Normally should contain a ':' and a path, eg "myremote:path/to/dir",
+"myremote:bucket" or maybe "myremote:" (not recommended).
+
+- Config: remote
+- Env Var: RCLONE_CRYPT_REMOTE
+- Type: string
+- Default: ""
+
+#### --crypt-filename-encryption
+
+How to encrypt the filenames.
+
+- Config: filename_encryption
+- Env Var: RCLONE_CRYPT_FILENAME_ENCRYPTION
+- Type: string
+- Default: "standard"
+- Examples:
+ - "off"
+ - Don't encrypt the file names. Adds a ".bin" extension only.
+ - "standard"
+ - Encrypt the filenames see the docs for the details.
+ - "obfuscate"
+ - Very simple filename obfuscation.
+
+#### --crypt-directory-name-encryption
+
+Option to either encrypt directory names or leave them intact.
+
+- Config: directory_name_encryption
+- Env Var: RCLONE_CRYPT_DIRECTORY_NAME_ENCRYPTION
+- Type: bool
+- Default: true
+- Examples:
+ - "true"
+ - Encrypt directory names.
+ - "false"
+ - Don't encrypt directory names, leave them intact.
+
+#### --crypt-password
+
+Password or pass phrase for encryption.
+
+- Config: password
+- Env Var: RCLONE_CRYPT_PASSWORD
+- Type: string
+- Default: ""
+
+#### --crypt-password2
+
+Password or pass phrase for salt. Optional but recommended.
+Should be different to the previous password.
+
+- Config: password2
+- Env Var: RCLONE_CRYPT_PASSWORD2
+- Type: string
+- Default: ""
+
+### Advanced Options
+
+Here are the advanced options specific to crypt (Encrypt/Decrypt a remote).
+
+#### --crypt-show-mapping
+
+For all files listed show how the names encrypt.
If this flag is set then for each file that the remote is asked to
list, it will log (at level INFO) a line stating the decrypted file
@@ -8078,6 +9288,13 @@ This is so you can work out which encrypted names are which decrypted
names just in case you need to do something with the encrypted file
names, or for debugging purposes.
+- Config: show_mapping
+- Env Var: RCLONE_CRYPT_SHOW_MAPPING
+- Type: bool
+- Default: false
+
+
+
## Backing up a crypted remote ##
If you wish to backup a crypted remote, it is recommended that you use
@@ -8323,21 +9540,53 @@ Dropbox supports [its own hash
type](https://www.dropbox.com/developers/reference/content-hash) which
is checked for all transfers.
-### Specific options ###
+
+### Standard Options
-Here are the command line options specific to this cloud storage
-system.
+Here are the standard options specific to dropbox (Dropbox).
-#### --dropbox-chunk-size=SIZE ####
+#### --dropbox-client-id
-Any files larger than this will be uploaded in chunks of this
-size. The default is 48MB. The maximum is 150MB.
+Dropbox App Client Id
+Leave blank normally.
+
+- Config: client_id
+- Env Var: RCLONE_DROPBOX_CLIENT_ID
+- Type: string
+- Default: ""
+
+#### --dropbox-client-secret
+
+Dropbox App Client Secret
+Leave blank normally.
+
+- Config: client_secret
+- Env Var: RCLONE_DROPBOX_CLIENT_SECRET
+- Type: string
+- Default: ""
+
+### Advanced Options
+
+Here are the advanced options specific to dropbox (Dropbox).
+
+#### --dropbox-chunk-size
+
+Upload chunk size. (< 150M).
+
+Any files larger than this will be uploaded in chunks of this size.
Note that chunks are buffered in memory (one at a time) so rclone can
deal with retries. Setting this larger will increase the speed
slightly (at most 10% for 128MB in tests) at the cost of using more
memory. It can be set smaller if you are tight on memory.
+- Config: chunk_size
+- Env Var: RCLONE_DROPBOX_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 48M
+
+
+
### Limitations ###
Note that Dropbox is case insensitive so you can't have a file called
@@ -8469,6 +9718,52 @@ will be time of upload.
FTP does not support any checksums.
+
+### Standard Options
+
+Here are the standard options specific to ftp (FTP Connection).
+
+#### --ftp-host
+
+FTP host to connect to
+
+- Config: host
+- Env Var: RCLONE_FTP_HOST
+- Type: string
+- Default: ""
+- Examples:
+ - "ftp.example.com"
+ - Connect to ftp.example.com
+
+#### --ftp-user
+
+FTP username, leave blank for current username, ncw
+
+- Config: user
+- Env Var: RCLONE_FTP_USER
+- Type: string
+- Default: ""
+
+#### --ftp-port
+
+FTP port, leave blank to use default (21)
+
+- Config: port
+- Env Var: RCLONE_FTP_PORT
+- Type: string
+- Default: ""
+
+#### --ftp-pass
+
+FTP password
+
+- Config: pass
+- Env Var: RCLONE_FTP_PASS
+- Type: string
+- Default: ""
+
+
+
### Limitations ###
Note that since FTP isn't HTTP based the following flags don't work
@@ -8705,6 +10000,167 @@ Google google cloud storage stores md5sums natively and rclone stores
modification times as metadata on the object, under the "mtime" key in
RFC3339 format accurate to 1ns.
+
+### Standard Options
+
+Here are the standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
+
+#### --gcs-client-id
+
+Google Application Client Id
+Leave blank normally.
+
+- Config: client_id
+- Env Var: RCLONE_GCS_CLIENT_ID
+- Type: string
+- Default: ""
+
+#### --gcs-client-secret
+
+Google Application Client Secret
+Leave blank normally.
+
+- Config: client_secret
+- Env Var: RCLONE_GCS_CLIENT_SECRET
+- Type: string
+- Default: ""
+
+#### --gcs-project-number
+
+Project number.
+Optional - needed only for list/create/delete buckets - see your developer console.
+
+- Config: project_number
+- Env Var: RCLONE_GCS_PROJECT_NUMBER
+- Type: string
+- Default: ""
+
+#### --gcs-service-account-file
+
+Service Account Credentials JSON file path
+Leave blank normally.
+Needed only if you want to use SA instead of interactive login.
+
+- Config: service_account_file
+- Env Var: RCLONE_GCS_SERVICE_ACCOUNT_FILE
+- Type: string
+- Default: ""
+
+#### --gcs-service-account-credentials
+
+Service Account Credentials JSON blob
+Leave blank normally.
+Needed only if you want to use SA instead of interactive login.
+
+- Config: service_account_credentials
+- Env Var: RCLONE_GCS_SERVICE_ACCOUNT_CREDENTIALS
+- Type: string
+- Default: ""
+
+#### --gcs-object-acl
+
+Access Control List for new objects.
+
+- Config: object_acl
+- Env Var: RCLONE_GCS_OBJECT_ACL
+- Type: string
+- Default: ""
+- Examples:
+ - "authenticatedRead"
+ - Object owner gets OWNER access, and all Authenticated Users get READER access.
+ - "bucketOwnerFullControl"
+ - Object owner gets OWNER access, and project team owners get OWNER access.
+ - "bucketOwnerRead"
+ - Object owner gets OWNER access, and project team owners get READER access.
+ - "private"
+ - Object owner gets OWNER access [default if left blank].
+ - "projectPrivate"
+ - Object owner gets OWNER access, and project team members get access according to their roles.
+ - "publicRead"
+ - Object owner gets OWNER access, and all Users get READER access.
+
+#### --gcs-bucket-acl
+
+Access Control List for new buckets.
+
+- Config: bucket_acl
+- Env Var: RCLONE_GCS_BUCKET_ACL
+- Type: string
+- Default: ""
+- Examples:
+ - "authenticatedRead"
+ - Project team owners get OWNER access, and all Authenticated Users get READER access.
+ - "private"
+ - Project team owners get OWNER access [default if left blank].
+ - "projectPrivate"
+ - Project team members get access according to their roles.
+ - "publicRead"
+ - Project team owners get OWNER access, and all Users get READER access.
+ - "publicReadWrite"
+ - Project team owners get OWNER access, and all Users get WRITER access.
+
+#### --gcs-location
+
+Location for the newly created buckets.
+
+- Config: location
+- Env Var: RCLONE_GCS_LOCATION
+- Type: string
+- Default: ""
+- Examples:
+ - ""
+ - Empty for default location (US).
+ - "asia"
+ - Multi-regional location for Asia.
+ - "eu"
+ - Multi-regional location for Europe.
+ - "us"
+ - Multi-regional location for United States.
+ - "asia-east1"
+ - Taiwan.
+ - "asia-northeast1"
+ - Tokyo.
+ - "asia-southeast1"
+ - Singapore.
+ - "australia-southeast1"
+ - Sydney.
+ - "europe-west1"
+ - Belgium.
+ - "europe-west2"
+ - London.
+ - "us-central1"
+ - Iowa.
+ - "us-east1"
+ - South Carolina.
+ - "us-east4"
+ - Northern Virginia.
+ - "us-west1"
+ - Oregon.
+
+#### --gcs-storage-class
+
+The storage class to use when storing objects in Google Cloud Storage.
+
+- Config: storage_class
+- Env Var: RCLONE_GCS_STORAGE_CLASS
+- Type: string
+- Default: ""
+- Examples:
+ - ""
+ - Default
+ - "MULTI_REGIONAL"
+ - Multi-regional storage class
+ - "REGIONAL"
+ - Regional storage class
+ - "NEARLINE"
+ - Nearline storage class
+ - "COLDLINE"
+ - Coldline storage class
+ - "DURABLE_REDUCED_AVAILABILITY"
+ - Durable reduced availability storage class
+
+
+
Google Drive
-----------------------------------------
@@ -9089,64 +10545,74 @@ Drive, the size of all files in the Trash and the space used by other
Google services such as Gmail. This command does not take any path
arguments.
-### Specific options ###
+#### Import/Export of google documents ####
-Here are the command line options specific to this cloud storage
-system.
+Google documents can be exported from and uploaded to Google Drive.
-#### --drive-acknowledge-abuse ####
-
-If downloading a file returns the error `This file has been identified
-as malware or spam and cannot be downloaded` with the error code
-`cannotDownloadAbusiveFile` then supply this flag to rclone to
-indicate you acknowledge the risks of downloading the file and rclone
-will download it anyway.
-
-#### --drive-auth-owner-only ####
-
-Only consider files owned by the authenticated user.
-
-#### --drive-chunk-size=SIZE ####
-
-Upload chunk size. Must a power of 2 >= 256k. Default value is 8 MB.
-
-Making this larger will improve performance, but note that each chunk
-is buffered in memory one per transfer.
-
-Reducing this will reduce memory usage but decrease performance.
-
-#### --drive-formats ####
-
-Google documents can only be exported from Google drive. When rclone
-downloads a Google doc it chooses a format to download depending upon
-this setting.
-
-By default the formats are `docx,xlsx,pptx,svg` which are a sensible
-default for an editable document.
+When rclone downloads a Google doc it chooses a format to download
+depending upon the `--drive-export-formats` setting.
+By default the export formats are `docx,xlsx,pptx,svg` which are a
+sensible default for an editable document.
When choosing a format, rclone runs down the list provided in order
and chooses the first file format the doc can be exported as from the
list. If the file can't be exported to a format on the formats list,
then rclone will choose a format from the default list.
-If you prefer an archive copy then you might use `--drive-formats
+If you prefer an archive copy then you might use `--drive-export-formats
pdf`, or if you prefer openoffice/libreoffice formats you might use
-`--drive-formats ods,odt,odp`.
+`--drive-export-formats ods,odt,odp`.
Note that rclone adds the extension to the google doc, so if it is
called `My Spreadsheet` on google docs, it will be exported as `My
Spreadsheet.xlsx` or `My Spreadsheet.pdf` etc.
-Here are the possible extensions with their corresponding mime types.
+When importing files into Google Drive, rclone will convert all
+files with an extension in `--drive-import-formats` to their
+associated document type.
+rclone will not convert any files by default, since the conversion
+is a lossy process.
+
+The conversion must result in a file with the same extension when
+the `--drive-export-formats` rules are applied to the uploaded document.
+
+Here are some examples for allowed and prohibited conversions.
+
+| export-formats | import-formats | Upload Ext | Document Ext | Allowed |
+| -------------- | -------------- | ---------- | ------------ | ------- |
+| odt | odt | odt | odt | Yes |
+| odt | docx,odt | odt | odt | Yes |
+| | docx | docx | docx | Yes |
+| | odt | odt | docx | No |
+| odt,docx | docx,odt | docx | odt | No |
+| docx,odt | docx,odt | docx | docx | Yes |
+| docx,odt | docx,odt | odt | docx | No |
+
+This limitation can be disabled by specifying `--drive-allow-import-name-change`.
+When using this flag, rclone can convert multiple file types resulting
+in the same document type at once, eg with `--drive-import-formats docx,odt,txt`,
+all files having these extensions would result in a document represented as a docx file.
+This brings the additional risk of overwriting a document, if multiple files
+have the same stem. Many rclone operations will not handle this name change
+in any way. They assume an equal name when copying files and might copy the
+file again or delete them when the name changes.
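The "same extension after export" rule behind the table above can be modelled with a short sketch, under the simplifying assumption that a document can be exported to every listed format (so the first entry of `--drive-export-formats` wins); `conversion_allowed` is a hypothetical helper, not rclone's code:

```python
def conversion_allowed(upload_ext, doc_ext, export_formats):
    """The import is only allowed when exporting the converted document
    (first --drive-export-formats entry, falling back to the document's
    own type when the list is empty) reproduces the upload extension."""
    exported = export_formats[0] if export_formats else doc_ext
    return exported == upload_ext

# Rows from the table: odt|odt|odt|odt -> Yes, odt,docx|...|docx|odt -> No
print(conversion_allowed("odt", "odt", ["odt"]))          # True
print(conversion_allowed("docx", "odt", ["odt", "docx"]))  # False
```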
+
+Here are the possible export extensions with their corresponding mime types.
+Most of these can also be used for importing, but there are more that are not
+listed here. Some of these additional ones might only be available when
+the operating system provides the correct MIME type entries.
+
+This list can be changed by Google Drive at any time and might not
+represent the currently available conversions.
| Extension | Mime Type | Description |
| --------- |-----------| ------------|
| csv | text/csv | Standard CSV format for Spreadsheets |
-| doc | application/msword | Micosoft Office Document |
| docx | application/vnd.openxmlformats-officedocument.wordprocessingml.document | Microsoft Office Document |
| epub | application/epub+zip | E-book format |
| html | text/html | An HTML Document |
| jpg | image/jpeg | A JPEG Image File |
+| json | application/vnd.google-apps.script+json | JSON Text Format |
| odp | application/vnd.oasis.opendocument.presentation | Openoffice Presentation |
| ods | application/vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet |
| ods | application/x-vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet |
@@ -9158,11 +10624,255 @@ Here are the possible extensions with their corresponding mime types.
| svg | image/svg+xml | Scalable Vector Graphics Format |
| tsv | text/tab-separated-values | Standard TSV format for spreadsheets |
| txt | text/plain | Plain Text |
-| xls | application/vnd.ms-excel | Microsoft Office Spreadsheet |
| xlsx | application/vnd.openxmlformats-officedocument.spreadsheetml.sheet | Microsoft Office Spreadsheet |
| zip | application/zip | A ZIP file of HTML, Images and CSS |
-#### --drive-alternate-export ####
+Google documents can also be exported as link files. These files will
+open a browser window for the Google Docs website of that document
+when opened. The link file extension has to be specified as a
+`--drive-export-formats` parameter. They will match all available
+Google Documents.
+
+| Extension | Description | OS Support |
+| --------- | ----------- | ---------- |
+| desktop | freedesktop.org specified desktop entry | Linux |
+| link.html | An HTML Document with a redirect | All |
+| url | INI style link file | macOS, Windows |
+| webloc | macOS specific XML format | macOS |
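
For example (remote name assumed), a copy that exports every Google document as a small HTML redirect file:

```
rclone copy gdrive:Documents /home/local/docs \
    --drive-export-formats link.html
```

Opening one of the resulting `*.link.html` files sends the browser to the document on the Google Docs website.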
+
+
+### Standard Options
+
+Here are the standard options specific to drive (Google Drive).
+
+#### --drive-client-id
+
+Google Application Client Id
+Leave blank normally.
+
+- Config: client_id
+- Env Var: RCLONE_DRIVE_CLIENT_ID
+- Type: string
+- Default: ""
+
+#### --drive-client-secret
+
+Google Application Client Secret
+Leave blank normally.
+
+- Config: client_secret
+- Env Var: RCLONE_DRIVE_CLIENT_SECRET
+- Type: string
+- Default: ""
+
+#### --drive-scope
+
+Scope that rclone should use when requesting access from drive.
+
+- Config: scope
+- Env Var: RCLONE_DRIVE_SCOPE
+- Type: string
+- Default: ""
+- Examples:
+ - "drive"
+ - Full access all files, excluding Application Data Folder.
+ - "drive.readonly"
+ - Read-only access to file metadata and file contents.
+ - "drive.file"
+ - Access to files created by rclone only.
+ - These are visible in the drive website.
+ - File authorization is revoked when the user deauthorizes the app.
+ - "drive.appfolder"
+ - Allows read and write access to the Application Data folder.
+ - This is not visible in the drive website.
+ - "drive.metadata.readonly"
+ - Allows read-only access to file metadata but
+ - does not allow any access to read or download file content.
+
+#### --drive-root-folder-id
+
+ID of the root folder
+Leave blank normally.
+Fill in to access "Computers" folders. (see docs).
+
+- Config: root_folder_id
+- Env Var: RCLONE_DRIVE_ROOT_FOLDER_ID
+- Type: string
+- Default: ""
+
+#### --drive-service-account-file
+
+Service Account Credentials JSON file path
+Leave blank normally.
+Needed only if you want to use SA instead of interactive login.
+
+- Config: service_account_file
+- Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_FILE
+- Type: string
+- Default: ""
+
+### Advanced Options
+
+Here are the advanced options specific to drive (Google Drive).
+
+#### --drive-service-account-credentials
+
+Service Account Credentials JSON blob
+Leave blank normally.
+Needed only if you want to use SA instead of interactive login.
+
+- Config: service_account_credentials
+- Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_CREDENTIALS
+- Type: string
+- Default: ""
+
+#### --drive-team-drive
+
+ID of the Team Drive
+
+- Config: team_drive
+- Env Var: RCLONE_DRIVE_TEAM_DRIVE
+- Type: string
+- Default: ""
+
+#### --drive-auth-owner-only
+
+Only consider files owned by the authenticated user.
+
+- Config: auth_owner_only
+- Env Var: RCLONE_DRIVE_AUTH_OWNER_ONLY
+- Type: bool
+- Default: false
+
+#### --drive-use-trash
+
+Send files to the trash instead of deleting permanently.
+Defaults to true, namely sending files to the trash.
+Use `--drive-use-trash=false` to delete files permanently instead.
+
+- Config: use_trash
+- Env Var: RCLONE_DRIVE_USE_TRASH
+- Type: bool
+- Default: true
+
+#### --drive-skip-gdocs
+
+Skip google documents in all listings.
+If given, gdocs practically become invisible to rclone.
+
+- Config: skip_gdocs
+- Env Var: RCLONE_DRIVE_SKIP_GDOCS
+- Type: bool
+- Default: false
+
+#### --drive-shared-with-me
+
+Only show files that are shared with me.
+
+Instructs rclone to operate on your "Shared with me" folder (where
+Google Drive lets you access the files and folders others have shared
+with you).
+
+This works both with the "list" (lsd, lsl, etc) and the "copy"
+commands (copy, sync, etc), and with all other commands too.
+
+- Config: shared_with_me
+- Env Var: RCLONE_DRIVE_SHARED_WITH_ME
+- Type: bool
+- Default: false
+
+#### --drive-trashed-only
+
+Only show files that are in the trash.
+This will show trashed files in their original directory structure.
+
+- Config: trashed_only
+- Env Var: RCLONE_DRIVE_TRASHED_ONLY
+- Type: bool
+- Default: false
+
+#### --drive-formats
+
+Deprecated: see export_formats
+
+- Config: formats
+- Env Var: RCLONE_DRIVE_FORMATS
+- Type: string
+- Default: ""
+
+#### --drive-export-formats
+
+Comma separated list of preferred formats for downloading Google docs.
+
+- Config: export_formats
+- Env Var: RCLONE_DRIVE_EXPORT_FORMATS
+- Type: string
+- Default: "docx,xlsx,pptx,svg"
+
+#### --drive-import-formats
+
+Comma separated list of preferred formats for uploading Google docs.
+
+- Config: import_formats
+- Env Var: RCLONE_DRIVE_IMPORT_FORMATS
+- Type: string
+- Default: ""
+
+#### --drive-allow-import-name-change
+
+Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+
+- Config: allow_import_name_change
+- Env Var: RCLONE_DRIVE_ALLOW_IMPORT_NAME_CHANGE
+- Type: bool
+- Default: false
+
+#### --drive-use-created-date
+
+Use file created date instead of modified date.
+
+Useful when downloading data and you want the creation date used in
+place of the last modified date.
+
+**WARNING**: This flag may have some unexpected consequences.
+
+When uploading to your drive all files will be overwritten unless they
+haven't been modified since their creation. And the inverse will occur
+while downloading. This side effect can be avoided by using the
+"--checksum" flag.
+
+This feature was implemented to retain the capture date of photos as
+recorded by Google Photos. You will first need to check the "Create a
+Google Photos folder" option in your Google Drive settings. You can then
+copy or move the photos locally and use the date the image was taken
+(created) as the modification date.
+
+- Config: use_created_date
+- Env Var: RCLONE_DRIVE_USE_CREATED_DATE
+- Type: bool
+- Default: false
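
The `--checksum` workaround mentioned above can be sketched as follows (remote name assumed):

```
rclone copy gdrive:photos /home/local/photos \
    --drive-use-created-date --checksum
```

Comparing by MD5 rather than modification time avoids the re-transfers that the date substitution would otherwise trigger.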
+
+#### --drive-list-chunk
+
+Size of listing chunk 100-1000. 0 to disable.
+
+- Config: list_chunk
+- Env Var: RCLONE_DRIVE_LIST_CHUNK
+- Type: int
+- Default: 1000
+
+#### --drive-impersonate
+
+Impersonate this user when using a service account.
+
+- Config: impersonate
+- Env Var: RCLONE_DRIVE_IMPERSONATE
+- Type: string
+- Default: ""
+
+#### --drive-alternate-export
+
+Use alternate export URLs for Google documents export.
If this option is set this instructs rclone to use an alternate set of
export URLs for drive documents. Users have reported that the
@@ -9173,66 +10883,68 @@ See rclone issue [#2243](https://github.com/ncw/rclone/issues/2243) for backgrou
[this google drive issue](https://issuetracker.google.com/issues/36761333) and
[this helpful post](https://www.labnol.org/internet/direct-links-for-google-drive/28356/).
-#### --drive-impersonate user ####
+- Config: alternate_export
+- Env Var: RCLONE_DRIVE_ALTERNATE_EXPORT
+- Type: bool
+- Default: false
-When using a service account, this instructs rclone to impersonate the user passed in.
+#### --drive-upload-cutoff
-#### --drive-keep-revision-forever ####
+Cutoff for switching to chunked upload
-Keeps new head revision of the file forever.
+- Config: upload_cutoff
+- Env Var: RCLONE_DRIVE_UPLOAD_CUTOFF
+- Type: SizeSuffix
+- Default: 8M
-#### --drive-list-chunk int ####
+#### --drive-chunk-size
-Size of listing chunk 100-1000. 0 to disable. (default 1000)
+Upload chunk size. Must be a power of 2 >= 256k.
-#### --drive-shared-with-me ####
+Making this larger will improve performance, but note that one chunk
+is buffered in memory per transfer.
-Instructs rclone to operate on your "Shared with me" folder (where
-Google Drive lets you access the files and folders others have shared
-with you).
+Reducing this will reduce memory usage but decrease performance.
-This works both with the "list" (lsd, lsl, etc) and the "copy"
-commands (copy, sync, etc), and with all other commands too.
+- Config: chunk_size
+- Env Var: RCLONE_DRIVE_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 8M
-#### --drive-skip-gdocs ####
+#### --drive-acknowledge-abuse
-Skip google documents in all listings. If given, gdocs practically become invisible to rclone.
+Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
-#### --drive-trashed-only ####
+If downloading a file returns the error "This file has been identified
+as malware or spam and cannot be downloaded" with the error code
+"cannotDownloadAbusiveFile" then supply this flag to rclone to
+indicate you acknowledge the risks of downloading the file and rclone
+will download it anyway.
-Only show files that are in the trash. This will show trashed files
-in their original directory structure.
+- Config: acknowledge_abuse
+- Env Var: RCLONE_DRIVE_ACKNOWLEDGE_ABUSE
+- Type: bool
+- Default: false
-#### --drive-upload-cutoff=SIZE ####
+#### --drive-keep-revision-forever
-File size cutoff for switching to chunked upload. Default is 8 MB.
+Keep new head revision of each file forever.
-#### --drive-use-trash ####
+- Config: keep_revision_forever
+- Env Var: RCLONE_DRIVE_KEEP_REVISION_FOREVER
+- Type: bool
+- Default: false
-Controls whether files are sent to the trash or deleted
-permanently. Defaults to true, namely sending files to the trash. Use
-`--drive-use-trash=false` to delete files permanently instead.
+#### --drive-v2-download-min-size
-#### --drive-use-created-date ####
+If objects are larger than this size, use the drive v2 API to download.
-Use the file creation date in place of the modification date. Defaults
-to false.
+- Config: v2_download_min_size
+- Env Var: RCLONE_DRIVE_V2_DOWNLOAD_MIN_SIZE
+- Type: SizeSuffix
+- Default: off
-Useful when downloading data and you want the creation date used in
-place of the last modified date.
-
-**WARNING**: This flag may have some unexpected consequences.
-
-When uploading to your drive all files will be overwritten unless they
-haven't been modified since their creation. And the inverse will occur
-while downloading. This side effect can be avoided by using the
-`--checksum` flag.
-
-This feature was implemented to retain photos capture date as recorded
-by google photos. You will first need to check the "Create a Google
-Photos folder" option in your google drive settings. You can then copy
-or move the photos locally and use the date the image was taken
-(created) set as the modification date.
+
### Limitations ###
@@ -9441,6 +11153,25 @@ without a config file:
rclone lsd --http-url https://beta.rclone.org :http:
+
+### Standard Options
+
+Here are the standard options specific to http (http Connection).
+
+#### --http-url
+
+URL of http host to connect to
+
+- Config: url
+- Env Var: RCLONE_HTTP_URL
+- Type: string
+- Default: ""
+- Examples:
+ - "https://example.com"
+ - Connect to example.com
+
+
+
Hubic
-----------------------------------------
@@ -9565,6 +11296,49 @@ amongst others) for storing the modification time for an object.
Note that Hubic wraps the Swift backend, so most of the properties
are the same.
+
+### Standard Options
+
+Here are the standard options specific to hubic (Hubic).
+
+#### --hubic-client-id
+
+Hubic Client Id
+Leave blank normally.
+
+- Config: client_id
+- Env Var: RCLONE_HUBIC_CLIENT_ID
+- Type: string
+- Default: ""
+
+#### --hubic-client-secret
+
+Hubic Client Secret
+Leave blank normally.
+
+- Config: client_secret
+- Env Var: RCLONE_HUBIC_CLIENT_SECRET
+- Type: string
+- Default: ""
+
+### Advanced Options
+
+Here are the advanced options specific to hubic (Hubic).
+
+#### --hubic-chunk-size
+
+Above this size files will be chunked into a _segments container.
+
+Above this size files will be chunked into a _segments container. The
+default for this is 5GB which is its maximum value.
+
+- Config: chunk_size
+- Env Var: RCLONE_HUBIC_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 5G
+
+
+
### Limitations ###
This uses the normal OpenStack Swift mechanism to refresh the Swift
@@ -9652,6 +11426,15 @@ To copy a local directory to an Jottacloud directory called backup
rclone copy /home/source remote:backup
+### --fast-list ###
+
+This remote supports `--fast-list` which allows you to use fewer
+transactions in exchange for more memory. See the [rclone
+docs](/docs/#fast-list) for more details.
+
+Note that the implementation in Jottacloud always uses only a single
+API request to get the entire list, so for large folders this could
+lead to a long wait before the first results are shown.
### Modified time and hashes ###
@@ -9670,12 +11453,93 @@ the `--jottacloud-md5-memory-limit` flag.
### Deleting files ###
-Any files you delete with rclone will end up in the trash. Due to a lack of API documentation emptying the trash is currently only possible via the Jottacloud website.
+By default rclone will send all files to the trash when deleting files.
+Due to a lack of API documentation emptying the trash is currently
+only possible via the Jottacloud website. If deleting permanently
+is required then use the `--jottacloud-hard-delete` flag,
+or set the equivalent environment variable.
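
As a sketch (the remote name `jotta:` is hypothetical), either form works:

```
rclone delete jotta:tmp --jottacloud-hard-delete

# or via the equivalent environment variable:
RCLONE_JOTTACLOUD_HARD_DELETE=true rclone delete jotta:tmp
```
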
### Versions ###
Jottacloud supports file versioning. When rclone uploads a new version of a file it creates a new version of it. Currently rclone only supports retrieving the current version but older versions can be accessed via the Jottacloud Website.
+### Quota information ###
+
+To view your current quota you can use the `rclone about remote:`
+command which will display your usage limit (unless it is unlimited)
+and the current usage.
+
+
+### Standard Options
+
+Here are the standard options specific to jottacloud (JottaCloud).
+
+#### --jottacloud-user
+
+User Name
+
+- Config: user
+- Env Var: RCLONE_JOTTACLOUD_USER
+- Type: string
+- Default: ""
+
+#### --jottacloud-pass
+
+Password.
+
+- Config: pass
+- Env Var: RCLONE_JOTTACLOUD_PASS
+- Type: string
+- Default: ""
+
+#### --jottacloud-mountpoint
+
+The mountpoint to use.
+
+- Config: mountpoint
+- Env Var: RCLONE_JOTTACLOUD_MOUNTPOINT
+- Type: string
+- Default: ""
+- Examples:
+ - "Sync"
+ - Will be synced by the official client.
+ - "Archive"
+ - Archive
+
+### Advanced Options
+
+Here are the advanced options specific to jottacloud (JottaCloud).
+
+#### --jottacloud-md5-memory-limit
+
+Files bigger than this will be cached on disk to calculate the MD5 if required.
+
+- Config: md5_memory_limit
+- Env Var: RCLONE_JOTTACLOUD_MD5_MEMORY_LIMIT
+- Type: SizeSuffix
+- Default: 10M
+
+#### --jottacloud-hard-delete
+
+Delete files permanently rather than putting them into the trash.
+
+- Config: hard_delete
+- Env Var: RCLONE_JOTTACLOUD_HARD_DELETE
+- Type: bool
+- Default: false
+
+#### --jottacloud-unlink
+
+Remove an existing public link to a file/folder with the link command
+rather than creating one.
+Default is false, meaning the link command will create or retrieve a public link.
+
+- Config: unlink
+- Env Var: RCLONE_JOTTACLOUD_UNLINK
+- Type: bool
+- Default: false
+
+
+
### Limitations ###
Note that Jottacloud is case insensitive so you can't have a file called
@@ -9685,16 +11549,6 @@ There are quite a few characters that can't be in Jottacloud file names. Rclone
Jottacloud only supports filenames up to 255 characters in length.
-### Specific options ###
-
-Here are the command line options specific to this cloud storage
-system.
-
-#### --jottacloud-md5-memory-limit SizeSuffix
-
-Files bigger than this will be cached on disk to calculate the MD5 if
-required. (default 10M)
-
### Troubleshooting ###
Jottacloud exhibits some inconsistent behaviours regarding deleted files and folders which may cause Copy, Move and DirMove operations to previously deleted paths to fail. Emptying the trash should help in such cases.
@@ -9791,22 +11645,59 @@ messages in the log about duplicates.
Use `rclone dedupe` to fix duplicated files.
-### Specific options ###
+
+### Standard Options
-Here are the command line options specific to this cloud storage
-system.
+Here are the standard options specific to mega (Mega).
-#### --mega-debug ####
+#### --mega-user
-If this flag is set (along with `-vv`) it will print further debugging
+User name
+
+- Config: user
+- Env Var: RCLONE_MEGA_USER
+- Type: string
+- Default: ""
+
+#### --mega-pass
+
+Password.
+
+- Config: pass
+- Env Var: RCLONE_MEGA_PASS
+- Type: string
+- Default: ""
+
+### Advanced Options
+
+Here are the advanced options specific to mega (Mega).
+
+#### --mega-debug
+
+Output more debug from Mega.
+
+If this flag is set (along with -vv) it will print further debugging
information from the mega backend.
-#### --mega-hard-delete ####
+- Config: debug
+- Env Var: RCLONE_MEGA_DEBUG
+- Type: bool
+- Default: false
+
+#### --mega-hard-delete
+
+Delete files permanently rather than putting them into the trash.
Normally the mega backend will put all deletions into the trash rather
-than permanently deleting them. If you specify this flag (or set it
-in the advanced config) then rclone will permanently delete objects
-instead.
+than permanently deleting them. If you specify this then rclone will
+permanently delete objects instead.
+
+- Config: hard_delete
+- Env Var: RCLONE_MEGA_HARD_DELETE
+- Type: bool
+- Default: false
+
+
### Limitations ###
@@ -9983,32 +11874,112 @@ upload which means that there is a limit of 9.5TB of multipart uploads
in progress as Azure won't allow more than that amount of uncommitted
blocks.
-### Specific options ###
+
+### Standard Options
-Here are the command line options specific to this cloud storage
-system.
+Here are the standard options specific to azureblob (Microsoft Azure Blob Storage).
-#### --azureblob-upload-cutoff=SIZE ####
+#### --azureblob-account
-Cutoff for switching to chunked upload - must be <= 256MB. The default
-is 256MB.
+Storage Account Name (leave blank to use connection string or SAS URL)
-#### --azureblob-chunk-size=SIZE ####
+- Config: account
+- Env Var: RCLONE_AZUREBLOB_ACCOUNT
+- Type: string
+- Default: ""
-Upload chunk size. Default 4MB. Note that this is stored in memory
-and there may be up to `--transfers` chunks stored at once in memory.
-This can be at most 100MB.
+#### --azureblob-key
-#### --azureblob-access-tier=Hot/Cool/Archive ####
+Storage Account Key (leave blank to use connection string or SAS URL)
-Azure storage supports blob tiering, you can configure tier in advanced
-settings or supply flag while performing data transfer operations.
-If there is no `access tier` specified, rclone doesn't apply any tier.
-rclone performs `Set Tier` operation on blobs while uploading, if objects
-are not modified, specifying `access tier` to new one will have no effect.
-If blobs are in `archive tier` at remote, trying to perform data transfer
+- Config: key
+- Env Var: RCLONE_AZUREBLOB_KEY
+- Type: string
+- Default: ""
+
+#### --azureblob-sas-url
+
+SAS URL for container level access only
+(leave blank if using account/key or connection string)
+
+- Config: sas_url
+- Env Var: RCLONE_AZUREBLOB_SAS_URL
+- Type: string
+- Default: ""
+
+### Advanced Options
+
+Here are the advanced options specific to azureblob (Microsoft Azure Blob Storage).
+
+#### --azureblob-endpoint
+
+Endpoint for the service
+Leave blank normally.
+
+- Config: endpoint
+- Env Var: RCLONE_AZUREBLOB_ENDPOINT
+- Type: string
+- Default: ""
+
+#### --azureblob-upload-cutoff
+
+Cutoff for switching to chunked upload (<= 256MB).
+
+- Config: upload_cutoff
+- Env Var: RCLONE_AZUREBLOB_UPLOAD_CUTOFF
+- Type: SizeSuffix
+- Default: 256M
+
+#### --azureblob-chunk-size
+
+Upload chunk size (<= 100MB).
+
+Note that this is stored in memory and there may be up to
+"--transfers" chunks stored at once in memory.
+
+- Config: chunk_size
+- Env Var: RCLONE_AZUREBLOB_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 4M
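
As a rough back-of-the-envelope check (assuming the default of 4 transfers), peak chunk memory is the product of the two settings:

```shell
# Peak buffer memory ~= --transfers x --azureblob-chunk-size.
transfers=4   # rclone's default --transfers
chunk_mib=4   # default --azureblob-chunk-size in MiB
echo "$((transfers * chunk_mib)) MiB"   # prints "16 MiB"
```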
+
+#### --azureblob-list-chunk
+
+Size of blob list.
+
+This sets the number of blobs requested in each listing chunk. Default
+is the maximum, 5000. "List blobs" requests are permitted 2 minutes
+per megabyte to complete. If an operation is taking longer than 2
+minutes per megabyte on average, it will time out (
+[source](https://docs.microsoft.com/en-us/rest/api/storageservices/setting-timeouts-for-blob-service-operations#exceptions-to-default-timeout-interval)
+). This can be used to limit the number of blob items returned, to
+avoid the timeout.
+
+- Config: list_chunk
+- Env Var: RCLONE_AZUREBLOB_LIST_CHUNK
+- Type: int
+- Default: 5000
+
+#### --azureblob-access-tier
+
+Access tier of blob: hot, cool or archive.
+
+Archived blobs can be restored by setting access tier to hot or
+cool. Leave blank if you intend to use the default access tier, which is
+set at account level.
+
+If there is no "access tier" specified, rclone doesn't apply any tier.
+rclone performs "Set Tier" operation on blobs while uploading, if objects
+are not modified, specifying "access tier" to new one will have no effect.
+If blobs are in "archive tier" at remote, trying to perform data transfer
operations from remote will not be allowed. User should first restore by
-tiering blob to `Hot` or `Cool`.
+tiering blob to "Hot" or "Cool".
+
+- Config: access_tier
+- Env Var: RCLONE_AZUREBLOB_ACCESS_TIER
+- Type: string
+- Default: ""
+
+
### Limitations ###
@@ -10033,51 +12004,36 @@ Here is an example of how to make a remote called `remote`. First run:
This will guide you through an interactive setup process:
```
-No remotes found - make a new one
+e) Edit existing remote
n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
s) Set configuration password
-n/s> n
+q) Quit config
+e/n/d/r/c/s/q> n
name> remote
Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
- 1 / Amazon Drive
- \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
- \ "s3"
- 3 / Backblaze B2
- \ "b2"
- 4 / Dropbox
- \ "dropbox"
- 5 / Encrypt/Decrypt a remote
- \ "crypt"
- 6 / Google Cloud Storage (this is not Google Drive)
- \ "google cloud storage"
- 7 / Google Drive
- \ "drive"
- 8 / Hubic
- \ "hubic"
- 9 / Local Disk
- \ "local"
-10 / Microsoft OneDrive
+...
+17 / Microsoft OneDrive
\ "onedrive"
-11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
- \ "swift"
-12 / SSH/SFTP Connection
- \ "sftp"
-13 / Yandex Disk
- \ "yandex"
-Storage> 10
-Microsoft App Client Id - leave blank normally.
+...
+Storage> 17
+Microsoft App Client Id
+Leave blank normally.
+Enter a string value. Press Enter for the default ("").
client_id>
-Microsoft App Client Secret - leave blank normally.
+Microsoft App Client Secret
+Leave blank normally.
+Enter a string value. Press Enter for the default ("").
client_secret>
+Edit advanced config? (y/n)
+y) Yes
+n) No
+y/n> n
Remote config
-Choose OneDrive account type?
- * Say b for a OneDrive business account
- * Say p for a personal OneDrive account
-b) Business
-p) Personal
-b/p> p
Use auto config?
* Say Y if not sure
* Say N if you are working on a remote or headless machine
@@ -10088,11 +12044,32 @@ If your browser doesn't open automatically go to the following link: http://127.
Log in and authorize rclone for access
Waiting for code...
Got code
+Choose a number from below, or type in an existing value
+ 1 / OneDrive Personal or Business
+ \ "onedrive"
+ 2 / Sharepoint site
+ \ "sharepoint"
+ 3 / Type in driveID
+ \ "driveid"
+ 4 / Type in SiteID
+ \ "siteid"
+ 5 / Search a Sharepoint site
+ \ "search"
+Your choice> 1
+Found 1 drives, please select the one you want to use:
+0: OneDrive (business) id=b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk
+Chose drive to use:> 0
+Found drive 'root' of type 'business', URL: https://org-my.sharepoint.com/personal/you/Documents
+Is that okay?
+y) Yes
+n) No
+y/n> y
--------------------
[remote]
-client_id =
-client_secret =
-token = {"access_token":"XXXXXX"}
+type = onedrive
+token = {"access_token":"youraccesstoken","token_type":"Bearer","refresh_token":"yourrefreshtoken","expiry":"2018-08-26T22:39:52.486512262+08:00"}
+drive_id = b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk
+drive_type = business
--------------------
y) Yes this is OK
e) Edit this remote
@@ -10123,20 +12100,23 @@ To copy a local directory to an OneDrive directory called backup
rclone copy /home/source remote:backup
-### OneDrive for Business ###
+### Getting your own Client ID and Key ###
-There is additional support for OneDrive for Business.
-Select "b" when ask
-```
-Choose OneDrive account type?
- * Say b for a OneDrive business account
- * Say p for a personal OneDrive account
-b) Business
-p) Personal
-b/p>
-```
-After that rclone requires an authentication of your account. The application will first authenticate your account, then query the OneDrive resource URL
-and do a second (silent) authentication for this resource URL.
+By default rclone uses a Client ID and Key pair shared by all rclone
+users when performing requests.
+If you are having problems with them (e.g. seeing a lot of throttling), you can get your own
+Client ID and Key by following the steps below:
+
+1. Open https://apps.dev.microsoft.com/#/appList, then click `Add an app` (Choose `Converged applications` if applicable)
+2. Enter a name for your app, and click continue. Copy and keep the `Application Id` under the app name for later use.
+3. Under section `Application Secrets`, click `Generate New Password`. Copy and keep that password for later use.
+4. Under section `Platforms`, click `Add platform`, then `Web`. Enter `http://localhost:53682/` in
+`Redirect URLs`.
+5. Under section `Microsoft Graph Permissions`, `Add` these `delegated permissions`:
+`Files.Read`, `Files.ReadWrite`, `Files.Read.All`, `Files.ReadWrite.All`, `offline_access`, `User.Read`.
+6. Scroll to the bottom and click `Save`.
+
+Now the application is complete. Run `rclone config` to create or edit a OneDrive remote.
+Supply the app ID and password as Client ID and Secret, respectively. rclone will walk you through the remaining steps.
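
After the config walk-through, the resulting section of `rclone.conf` might look like this (all values are placeholders):

```
[remote]
type = onedrive
client_id = YOUR_APPLICATION_ID
client_secret = YOUR_APPLICATION_PASSWORD
```
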
### Modified time and hashes ###
@@ -10157,15 +12137,81 @@ doesn't provide an API to permanently delete files, nor to empty the
trash, so you will have to do that with one of Microsoft's apps or via
the OneDrive website.
-### Specific options ###
+
+### Standard Options
-Here are the command line options specific to this cloud storage
-system.
+Here are the standard options specific to onedrive (Microsoft OneDrive).
-#### --onedrive-chunk-size=SIZE ####
+#### --onedrive-client-id
-Above this size files will be chunked - must be multiple of 320k. The
-default is 10MB. Note that the chunks will be buffered into memory.
+Microsoft App Client Id
+Leave blank normally.
+
+- Config: client_id
+- Env Var: RCLONE_ONEDRIVE_CLIENT_ID
+- Type: string
+- Default: ""
+
+#### --onedrive-client-secret
+
+Microsoft App Client Secret
+Leave blank normally.
+
+- Config: client_secret
+- Env Var: RCLONE_ONEDRIVE_CLIENT_SECRET
+- Type: string
+- Default: ""
+
+### Advanced Options
+
+Here are the advanced options specific to onedrive (Microsoft OneDrive).
+
+#### --onedrive-chunk-size
+
+Chunk size to upload files with - must be a multiple of 320k.
+
+Above this size files will be chunked - must be a multiple of 320k. Note
+that the chunks will be buffered into memory.
+
+- Config: chunk_size
+- Env Var: RCLONE_ONEDRIVE_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 10M
+
+#### --onedrive-drive-id
+
+The ID of the drive to use
+
+- Config: drive_id
+- Env Var: RCLONE_ONEDRIVE_DRIVE_ID
+- Type: string
+- Default: ""
+
+#### --onedrive-drive-type
+
+The type of the drive ( personal | business | documentLibrary )
+
+- Config: drive_type
+- Env Var: RCLONE_ONEDRIVE_DRIVE_TYPE
+- Type: string
+- Default: ""
+
+#### --onedrive-expose-onenote-files
+
+Set to make OneNote files show up in directory listings.
+
+By default rclone will hide OneNote files in directory listings because
+operations like "Open" and "Update" won't work on them. But this
+behaviour may also prevent you from deleting them. If you want to
+delete OneNote files or otherwise want them to show up in directory
+listing, set this option.
+
+- Config: expose_onenote_files
+- Env Var: RCLONE_ONEDRIVE_EXPOSE_ONENOTE_FILES
+- Type: bool
+- Default: false
+
+
### Limitations ###
@@ -10306,13 +12352,30 @@ OpenDrive allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
not.
-### Deleting files ###
+
+### Standard Options
-Any files you delete with rclone will end up in the trash. Amazon
-don't provide an API to permanently delete files, nor to empty the
-trash, so you will have to do that with one of Amazon's apps or via
-the OpenDrive website. As of November 17, 2016, files are
-automatically deleted by Amazon from the trash after 30 days.
+Here are the standard options specific to opendrive (OpenDrive).
+
+#### --opendrive-username
+
+Username
+
+- Config: username
+- Env Var: RCLONE_OPENDRIVE_USERNAME
+- Type: string
+- Default: ""
+
+#### --opendrive-password
+
+Password.
+
+- Config: password
+- Env Var: RCLONE_OPENDRIVE_PASSWORD
+- Type: string
+- Default: ""
+
+
### Limitations ###
@@ -10473,6 +12536,90 @@ credentials. In order of precedence:
- Access Key ID: `QS_ACCESS_KEY_ID` or `QS_ACCESS_KEY`
- Secret Access Key: `QS_SECRET_ACCESS_KEY` or `QS_SECRET_KEY`
+
+### Standard Options
+
+Here are the standard options specific to qingstor (QingCloud Object Storage).
+
+#### --qingstor-env-auth
+
+Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+
+- Config: env_auth
+- Env Var: RCLONE_QINGSTOR_ENV_AUTH
+- Type: bool
+- Default: false
+- Examples:
+ - "false"
+ - Enter QingStor credentials in the next step
+ - "true"
+ - Get QingStor credentials from the environment (env vars or IAM)
+
+#### --qingstor-access-key-id
+
+QingStor Access Key ID
+Leave blank for anonymous access or runtime credentials.
+
+- Config: access_key_id
+- Env Var: RCLONE_QINGSTOR_ACCESS_KEY_ID
+- Type: string
+- Default: ""
+
+#### --qingstor-secret-access-key
+
+QingStor Secret Access Key (password)
+Leave blank for anonymous access or runtime credentials.
+
+- Config: secret_access_key
+- Env Var: RCLONE_QINGSTOR_SECRET_ACCESS_KEY
+- Type: string
+- Default: ""
+
+#### --qingstor-endpoint
+
+Enter an endpoint URL to connect to the QingStor API.
+Leave blank to use the default value "https://qingstor.com:443"
+
+- Config: endpoint
+- Env Var: RCLONE_QINGSTOR_ENDPOINT
+- Type: string
+- Default: ""
+
+#### --qingstor-zone
+
+Zone to connect to.
+Default is "pek3a".
+
+- Config: zone
+- Env Var: RCLONE_QINGSTOR_ZONE
+- Type: string
+- Default: ""
+- Examples:
+ - "pek3a"
+ - The Beijing (China) Three Zone
+ - Needs location constraint pek3a.
+ - "sh1a"
+ - The Shanghai (China) First Zone
+ - Needs location constraint sh1a.
+ - "gd2a"
+ - The Guangdong (China) Second Zone
+ - Needs location constraint gd2a.
+
+### Advanced Options
+
+Here are the advanced options specific to qingstor (QingCloud Object Storage).
+
+#### --qingstor-connection-retries
+
+Number of connection retries.
+
+- Config: connection_retries
+- Env Var: RCLONE_QINGSTOR_CONNECTION_RETRIES
+- Type: int
+- Default: 3
+
+
+
Swift
----------------------------------------
@@ -10730,21 +12877,201 @@ sufficient to determine if it is "dirty". By using `--update` along with
`--use-server-modtime`, you can avoid the extra API call and simply upload
files whose local modtime is newer than the time it was last uploaded.
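
As a sketch (remote and container names assumed):

```
rclone copy /home/source swift:container \
    --update --use-server-modtime
```

Only files whose local modification time is newer than the recorded upload time are transferred, saving one metadata request per object.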
-### Specific options ###
+
+### Standard Options
-Here are the command line options specific to this cloud storage
-system.
+Here are the standard options specific to swift (Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).
-#### --swift-storage-policy=STRING ####
-Apply the specified storage policy when creating a new container. The policy
-cannot be changed afterwards. The allowed configuration values and their
-meaning depend on your Swift storage provider.
+#### --swift-env-auth
-#### --swift-chunk-size=SIZE ####
+Get swift credentials from environment variables in standard OpenStack form.
+
+- Config: env_auth
+- Env Var: RCLONE_SWIFT_ENV_AUTH
+- Type: bool
+- Default: false
+- Examples:
+ - "false"
+ - Enter swift credentials in the next step
+ - "true"
+ - Get swift credentials from environment vars. Leave other fields blank if using this.
+
+#### --swift-user
+
+User name to log in (OS_USERNAME).
+
+- Config: user
+- Env Var: RCLONE_SWIFT_USER
+- Type: string
+- Default: ""
+
+#### --swift-key
+
+API key or password (OS_PASSWORD).
+
+- Config: key
+- Env Var: RCLONE_SWIFT_KEY
+- Type: string
+- Default: ""
+
+#### --swift-auth
+
+Authentication URL for server (OS_AUTH_URL).
+
+- Config: auth
+- Env Var: RCLONE_SWIFT_AUTH
+- Type: string
+- Default: ""
+- Examples:
+ - "https://auth.api.rackspacecloud.com/v1.0"
+ - Rackspace US
+ - "https://lon.auth.api.rackspacecloud.com/v1.0"
+ - Rackspace UK
+ - "https://identity.api.rackspacecloud.com/v2.0"
+ - Rackspace v2
+ - "https://auth.storage.memset.com/v1.0"
+ - Memset Memstore UK
+ - "https://auth.storage.memset.com/v2.0"
+ - Memset Memstore UK v2
+ - "https://auth.cloud.ovh.net/v2.0"
+ - OVH
+
+#### --swift-user-id
+
+User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+
+- Config: user_id
+- Env Var: RCLONE_SWIFT_USER_ID
+- Type: string
+- Default: ""
+
+#### --swift-domain
+
+User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+
+- Config: domain
+- Env Var: RCLONE_SWIFT_DOMAIN
+- Type: string
+- Default: ""
+
+#### --swift-tenant
+
+Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+
+- Config: tenant
+- Env Var: RCLONE_SWIFT_TENANT
+- Type: string
+- Default: ""
+
+#### --swift-tenant-id
+
+Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+
+- Config: tenant_id
+- Env Var: RCLONE_SWIFT_TENANT_ID
+- Type: string
+- Default: ""
+
+#### --swift-tenant-domain
+
+Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+
+- Config: tenant_domain
+- Env Var: RCLONE_SWIFT_TENANT_DOMAIN
+- Type: string
+- Default: ""
+
+#### --swift-region
+
+Region name - optional (OS_REGION_NAME)
+
+- Config: region
+- Env Var: RCLONE_SWIFT_REGION
+- Type: string
+- Default: ""
+
+#### --swift-storage-url
+
+Storage URL - optional (OS_STORAGE_URL)
+
+- Config: storage_url
+- Env Var: RCLONE_SWIFT_STORAGE_URL
+- Type: string
+- Default: ""
+
+#### --swift-auth-token
+
+Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+
+- Config: auth_token
+- Env Var: RCLONE_SWIFT_AUTH_TOKEN
+- Type: string
+- Default: ""
+
+#### --swift-auth-version
+
+AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+
+- Config: auth_version
+- Env Var: RCLONE_SWIFT_AUTH_VERSION
+- Type: int
+- Default: 0
+
+#### --swift-endpoint-type
+
+Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)
+
+- Config: endpoint_type
+- Env Var: RCLONE_SWIFT_ENDPOINT_TYPE
+- Type: string
+- Default: "public"
+- Examples:
+ - "public"
+ - Public (default, choose this if not sure)
+ - "internal"
+ - Internal (use internal service net)
+ - "admin"
+ - Admin
+
+#### --swift-storage-policy
+
+The storage policy to use when creating a new container
+
+This applies the specified storage policy when creating a new
+container. The policy cannot be changed afterwards. The allowed
+configuration values and their meaning depend on your Swift storage
+provider.
+
+- Config: storage_policy
+- Env Var: RCLONE_SWIFT_STORAGE_POLICY
+- Type: string
+- Default: ""
+- Examples:
+ - ""
+ - Default
+ - "pcs"
+ - OVH Public Cloud Storage
+ - "pca"
+ - OVH Public Cloud Archive
+
+### Advanced Options
+
+Here are the advanced options specific to swift (Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).
+
+#### --swift-chunk-size
+
Above this size files will be chunked into a _segments container. The
default for this is 5GB which is its maximum value.
+- Config: chunk_size
+- Env Var: RCLONE_SWIFT_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 5G
+
+
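With `env_auth` set, rclone reads the standard OpenStack variables instead of the per-remote config fields above. A minimal sketch, with placeholder credential values (the OVH auth URL is taken from the examples earlier in this section):

```shell
# Use OpenStack-style environment authentication with the swift backend.
# Credential values are placeholders for illustration only.
export OS_USERNAME="demo"
export OS_PASSWORD="secretEXAMPLE"
export OS_AUTH_URL="https://auth.cloud.ovh.net/v2.0"
export RCLONE_SWIFT_ENV_AUTH=true
# Other swift fields can then be left blank, eg: rclone lsd :swift:
echo "$OS_USERNAME $RCLONE_SWIFT_ENV_AUTH"
```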
+
### Modified time ###
The modified time is stored as metadata on the object as
@@ -10909,6 +13236,33 @@ Deleted files will be moved to the trash. Your subscription level
will determine how long items stay in the trash. `rclone cleanup` can
be used to empty the trash.
+
+### Standard Options
+
+Here are the standard options specific to pcloud (Pcloud).
+
+#### --pcloud-client-id
+
+Pcloud App Client Id
+Leave blank normally.
+
+- Config: client_id
+- Env Var: RCLONE_PCLOUD_CLIENT_ID
+- Type: string
+- Default: ""
+
+#### --pcloud-client-secret
+
+Pcloud App Client Secret
+Leave blank normally.
+
+- Config: client_secret
+- Env Var: RCLONE_PCLOUD_CLIENT_SECRET
+- Type: string
+- Default: ""
+
+
+
SFTP
----------------------------------------
@@ -11052,28 +13406,6 @@ And then at the end of the session
These commands can be used in scripts of course.
-### Specific options ###
-
-Here are the command line options specific to this remote.
-
-#### --sftp-ask-password ####
-
-Ask for the SFTP password if needed when no password has been configured.
-
-#### --ssh-path-override ####
-
-Override path used by SSH connection. Allows checksum calculation when
-SFTP and SSH paths are different. This issue affects among others Synology
-NAS boxes.
-
-Shared folders can be found in directories representing volumes
-
- rclone sync /home/local/directory remote:/directory --ssh-path-override /volume2/directory
-
-Home directory can be found in a shared folder called `homes`
-
- rclone sync /home/local/directory remote:/home/directory --ssh-path-override /volume1/homes/USER/directory
-
### Modified time ###
Modified times are stored on the server to 1 second precision.
@@ -11085,6 +13417,127 @@ upload (for example, certain configurations of ProFTPd with mod_sftp). If you
are using one of these servers, you can set the option `set_modtime = false` in
your RClone backend configuration to disable this behaviour.
+
+### Standard Options
+
+Here are the standard options specific to sftp (SSH/SFTP Connection).
+
+#### --sftp-host
+
+SSH host to connect to
+
+- Config: host
+- Env Var: RCLONE_SFTP_HOST
+- Type: string
+- Default: ""
+- Examples:
+ - "example.com"
+ - Connect to example.com
+
+#### --sftp-user
+
+SSH username, leave blank for the current username
+
+- Config: user
+- Env Var: RCLONE_SFTP_USER
+- Type: string
+- Default: ""
+
+#### --sftp-port
+
+SSH port, leave blank to use default (22)
+
+- Config: port
+- Env Var: RCLONE_SFTP_PORT
+- Type: string
+- Default: ""
+
+#### --sftp-pass
+
+SSH password, leave blank to use ssh-agent.
+
+- Config: pass
+- Env Var: RCLONE_SFTP_PASS
+- Type: string
+- Default: ""
+
+#### --sftp-key-file
+
+Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+
+- Config: key_file
+- Env Var: RCLONE_SFTP_KEY_FILE
+- Type: string
+- Default: ""
+
+#### --sftp-use-insecure-cipher
+
+Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+
+- Config: use_insecure_cipher
+- Env Var: RCLONE_SFTP_USE_INSECURE_CIPHER
+- Type: bool
+- Default: false
+- Examples:
+ - "false"
+ - Use default Cipher list.
+ - "true"
+ - Enables the use of the aes128-cbc cipher.
+
+#### --sftp-disable-hashcheck
+
+Disable the execution of SSH commands to determine if remote file hashing is available.
+Leave blank or set to false to enable hashing (recommended), set to true to disable hashing.
+
+- Config: disable_hashcheck
+- Env Var: RCLONE_SFTP_DISABLE_HASHCHECK
+- Type: bool
+- Default: false
+
+### Advanced Options
+
+Here are the advanced options specific to sftp (SSH/SFTP Connection).
+
+#### --sftp-ask-password
+
+Allow asking for SFTP password when needed.
+
+- Config: ask_password
+- Env Var: RCLONE_SFTP_ASK_PASSWORD
+- Type: bool
+- Default: false
+
+#### --sftp-path-override
+
+Override path used by SSH connection.
+
+This allows checksum calculation when SFTP and SSH paths are
+different. This issue affects among others Synology NAS boxes.
+
+Shared folders can be found in directories representing volumes
+
+ rclone sync /home/local/directory remote:/directory --ssh-path-override /volume2/directory
+
+Home directory can be found in a shared folder called "homes"
+
+ rclone sync /home/local/directory remote:/home/directory --ssh-path-override /volume1/homes/USER/directory
+
+- Config: path_override
+- Env Var: RCLONE_SFTP_PATH_OVERRIDE
+- Type: string
+- Default: ""
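The substitution this option performs can be illustrated with plain shell parameter expansion. This is an illustration only, using the Synology paths from the example above; it is not rclone's actual implementation.

```shell
# Sketch of the prefix mapping --sftp-path-override applies when running
# checksum commands over SSH (paths from the Synology example above).
sftp_path="/directory/photos/img.jpg"   # path as seen over SFTP
override="/volume2/directory"           # real path on the SSH side
ssh_path="${override}${sftp_path#/directory}"
echo "$ssh_path"
```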
+
+#### --sftp-set-modtime
+
+Set the modified time on the remote if set.
+
+- Config: set_modtime
+- Env Var: RCLONE_SFTP_SET_MODTIME
+- Type: bool
+- Default: true
+
+
+
### Limitations ###
SFTP supports checksums if the same login has shell access and `md5sum`
@@ -11116,6 +13569,162 @@ with it: `--dump-headers`, `--dump-bodies`, `--dump-auth`
Note that `--timeout` isn't supported (but `--contimeout` is).
+Union
+-----------------------------------------
+
+The `union` remote provides a unification similar to UnionFS using other remotes.
+
+Paths may be as deep as required or a local path,
+eg `remote:directory/subdirectory` or `/directory/subdirectory`.
+
+During the initial setup with `rclone config` you will specify the target
+remotes as a space separated list. The target remotes can be either local paths or other remotes.
+
+The order of the remotes is important as it defines which remotes take precedence over others if there are files with the same name in the same logical path.
+The last remote is the topmost remote and replaces files with the same name from previous remotes.
+
+Only the last remote is used to write to and delete from; all other remotes are read-only.
+
+Subfolders can be used in a target remote. Assume a union remote named `backup`
+with the remotes `mydrive:private/backup mydrive2:/backup`. Invoking `rclone mkdir backup:desktop`
+is exactly the same as invoking `rclone mkdir mydrive2:/backup/desktop`.
+
+There will be no special handling of paths containing `..` segments.
+Invoking `rclone mkdir backup:../desktop` is exactly the same as invoking
+`rclone mkdir mydrive2:/backup/../desktop`.
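The precedence rule can be demonstrated with two plain directories holding a same-named file. The paths below are throwaway examples; the last remote listed is the topmost, so its copy of the file is the one a union would show.

```shell
# Toy demonstration of union precedence with ordinary directories.
mkdir -p /tmp/union-demo/dir1 /tmp/union-demo/dir2
echo "from dir1" > /tmp/union-demo/dir1/a.txt
echo "from dir2" > /tmp/union-demo/dir2/a.txt
# A union of "dir1 dir2" resolves a.txt to dir2's version, because
# dir2 is the last (topmost) remote in the list:
cat /tmp/union-demo/dir2/a.txt
```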
+
+Here is an example of how to make a union called `remote` for local folders.
+First run:
+
+ rclone config
+
+This will guide you through an interactive setup process:
+
+```
+No remotes found - make a new one
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+ 1 / Alias for a existing remote
+ \ "alias"
+ 2 / Amazon Drive
+ \ "amazon cloud drive"
+ 3 / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)
+ \ "s3"
+ 4 / Backblaze B2
+ \ "b2"
+ 5 / Box
+ \ "box"
+ 6 / Builds a stackable unification remote, which can appear to merge the contents of several remotes
+ \ "union"
+ 7 / Cache a remote
+ \ "cache"
+ 8 / Dropbox
+ \ "dropbox"
+ 9 / Encrypt/Decrypt a remote
+ \ "crypt"
+10 / FTP Connection
+ \ "ftp"
+11 / Google Cloud Storage (this is not Google Drive)
+ \ "google cloud storage"
+12 / Google Drive
+ \ "drive"
+13 / Hubic
+ \ "hubic"
+14 / JottaCloud
+ \ "jottacloud"
+15 / Local Disk
+ \ "local"
+16 / Mega
+ \ "mega"
+17 / Microsoft Azure Blob Storage
+ \ "azureblob"
+18 / Microsoft OneDrive
+ \ "onedrive"
+19 / OpenDrive
+ \ "opendrive"
+20 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ \ "swift"
+21 / Pcloud
+ \ "pcloud"
+22 / QingCloud Object Storage
+ \ "qingstor"
+23 / SSH/SFTP Connection
+ \ "sftp"
+24 / Webdav
+ \ "webdav"
+25 / Yandex Disk
+ \ "yandex"
+26 / http Connection
+ \ "http"
+Storage> union
+List of space separated remotes.
+Can be 'remotea:test/dir remoteb:', '"remotea:test/space dir" remoteb:', etc.
+The last remote is used to write to.
+Enter a string value. Press Enter for the default ("").
+remotes>
+Remote config
+--------------------
+[remote]
+type = union
+remotes = C:\dir1 C:\dir2 C:\dir3
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Current remotes:
+
+Name Type
+==== ====
+remote union
+
+e) Edit existing remote
+n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+e/n/d/r/c/s/q> q
+```
+
+Once configured you can then use `rclone` like this,
+
+List directories in top level in `C:\dir1`, `C:\dir2` and `C:\dir3`
+
+ rclone lsd remote:
+
+List all the files in `C:\dir1`, `C:\dir2` and `C:\dir3`
+
+ rclone ls remote:
+
+Copy another local directory to the union directory called `source`, which will be placed in `C:\dir3`
+
+ rclone copy C:\source remote:source
+
+
+### Standard Options
+
+Here are the standard options specific to union (A stackable unification remote, which can appear to merge the contents of several remotes).
+
+#### --union-remotes
+
+List of space separated remotes.
+Can be 'remotea:test/dir remoteb:', '"remotea:test/space dir" remoteb:', etc.
+The last remote is used to write to.
+
+- Config: remotes
+- Env Var: RCLONE_UNION_REMOTES
+- Type: string
+- Default: ""
+
+
+
WebDAV
-----------------------------------------
@@ -11213,6 +13822,70 @@ Owncloud or Nextcloud rclone will support modified times.
Hashes are not supported.
+
+### Standard Options
+
+Here are the standard options specific to webdav (Webdav).
+
+#### --webdav-url
+
+URL of http host to connect to
+
+- Config: url
+- Env Var: RCLONE_WEBDAV_URL
+- Type: string
+- Default: ""
+- Examples:
+ - "https://example.com"
+ - Connect to example.com
+
+#### --webdav-vendor
+
+Name of the Webdav site/service/software you are using
+
+- Config: vendor
+- Env Var: RCLONE_WEBDAV_VENDOR
+- Type: string
+- Default: ""
+- Examples:
+ - "nextcloud"
+ - Nextcloud
+ - "owncloud"
+ - Owncloud
+ - "sharepoint"
+ - Sharepoint
+ - "other"
+ - Other site/service or software
+
+#### --webdav-user
+
+User name
+
+- Config: user
+- Env Var: RCLONE_WEBDAV_USER
+- Type: string
+- Default: ""
+
+#### --webdav-pass
+
+Password.
+
+- Config: pass
+- Env Var: RCLONE_WEBDAV_PASS
+- Type: string
+- Default: ""
+
+#### --webdav-bearer-token
+
+Bearer token instead of user/pass (eg a Macaroon)
+
+- Config: bearer_token
+- Env Var: RCLONE_WEBDAV_BEARER_TOKEN
+- Type: string
+- Default: ""
+
+
+
## Provider notes ##
See below for notes on specific providers.
@@ -11446,6 +14119,33 @@ If you wish to empty your trash you can use the `rclone cleanup remote:`
command which will permanently delete all your trashed files. This command
does not take any path arguments.
+
+### Standard Options
+
+Here are the standard options specific to yandex (Yandex Disk).
+
+#### --yandex-client-id
+
+Yandex Client Id
+Leave blank normally.
+
+- Config: client_id
+- Env Var: RCLONE_YANDEX_CLIENT_ID
+- Type: string
+- Default: ""
+
+#### --yandex-client-secret
+
+Yandex Client Secret
+Leave blank normally.
+
+- Config: client_secret
+- Env Var: RCLONE_YANDEX_CLIENT_SECRET
+- Type: string
+- Default: ""
+
+
+
Local Filesystem
-------------------------------------------
@@ -11517,17 +14217,13 @@ This will use UNC paths on `c:\src` but not on `z:\dst`.
Of course this will cause problems if the absolute path length of a
file exceeds 258 characters on z, so only use this option if you have to.
-### Specific options ###
-
-Here are the command line options specific to local storage
-
-#### --copy-links, -L ####
+### Symlinks / Junction points
Normally rclone will ignore symlinks or junction points (which behave
like symlinks under Windows).
-If you supply this flag then rclone will follow the symlink and copy
-the pointed to file or directory.
+If you supply `--copy-links` or `-L` then rclone will follow the
+symlink and copy the pointed to file or directory.
This flag applies to all commands.
@@ -11562,28 +14258,13 @@ $ rclone -L ls /tmp/a
6 b/one
```
-#### --local-no-check-updated ####
+### Restricting filesystems with --one-file-system
-Don't check to see if the files change during upload.
+Normally rclone will recurse through filesystems as mounted.
-Normally rclone checks the size and modification time of files as they
-are being uploaded and aborts with a message which starts `can't copy
-- source file is being updated` if the file changes during upload.
-
-However on some file systems this modification time check may fail (eg
-[Glusterfs #2206](https://github.com/ncw/rclone/issues/2206)) so this
-check can be disabled with this flag.
-
-#### --local-no-unicode-normalization ####
-
-This flag is deprecated now. Rclone no longer normalizes unicode file
-names, but it compares them with unicode normalization in the sync
-routine instead.
-
-#### --one-file-system, -x ####
-
-This tells rclone to stay in the filesystem specified by the root and
-not to recurse into different file systems.
+However if you set `--one-file-system` or `-x` this tells rclone to
+stay in the filesystem specified by the root and not to recurse into
+different file systems.
For example if you have a directory hierarchy like this
@@ -11618,17 +14299,206 @@ treats a bind mount to the same device as being on the same
filesystem.
**NB** This flag is only available on Unix based systems. On systems
-where it isn't supported (eg Windows) it will not appear as an valid
-flag.
+where it isn't supported (eg Windows) it will be ignored.
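The behaviour is analogous to `find`'s `-xdev` flag: walk the tree, but do not descend into directories on a different filesystem. A minimal sketch with throwaway paths (both files here are on the same filesystem, so both appear):

```shell
# -xdev stays on the starting point's filesystem, like --one-file-system.
mkdir -p /tmp/ofs-demo/sub
touch /tmp/ofs-demo/a.txt /tmp/ofs-demo/sub/b.txt
find /tmp/ofs-demo -xdev -type f | sort
```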
-#### --skip-links ####
+
+### Standard Options
+Here are the standard options specific to local (Local Disk).
+
+#### --local-nounc
+
+Disable UNC (long path names) conversion on Windows
+
+- Config: nounc
+- Env Var: RCLONE_LOCAL_NOUNC
+- Type: string
+- Default: ""
+- Examples:
+ - "true"
+ - Disables long file names
+
+### Advanced Options
+
+Here are the advanced options specific to local (Local Disk).
+
+#### --copy-links
+
+Follow symlinks and copy the pointed to item.
+
+- Config: copy_links
+- Env Var: RCLONE_LOCAL_COPY_LINKS
+- Type: bool
+- Default: false
+
+#### --skip-links
+
+Don't warn about skipped symlinks.
This flag disables warning messages on skipped symlinks or junction
points, as you explicitly acknowledge that they should be skipped.
+- Config: skip_links
+- Env Var: RCLONE_LOCAL_SKIP_LINKS
+- Type: bool
+- Default: false
+
+#### --local-no-unicode-normalization
+
+Don't apply unicode normalization to paths and filenames (Deprecated)
+
+This flag is deprecated now. Rclone no longer normalizes unicode file
+names, but it compares them with unicode normalization in the sync
+routine instead.
+
+- Config: no_unicode_normalization
+- Env Var: RCLONE_LOCAL_NO_UNICODE_NORMALIZATION
+- Type: bool
+- Default: false
+
+#### --local-no-check-updated
+
+Don't check to see if the files change during upload
+
+Normally rclone checks the size and modification time of files as they
+are being uploaded and aborts with a message which starts "can't copy
+- source file is being updated" if the file changes during upload.
+
+However on some file systems this modification time check may fail (eg
+[Glusterfs #2206](https://github.com/ncw/rclone/issues/2206)) so this
+check can be disabled with this flag.
+
+- Config: no_check_updated
+- Env Var: RCLONE_LOCAL_NO_CHECK_UPDATED
+- Type: bool
+- Default: false
+
+#### --one-file-system
+
+Don't cross filesystem boundaries (unix/macOS only).
+
+- Config: one_file_system
+- Env Var: RCLONE_LOCAL_ONE_FILE_SYSTEM
+- Type: bool
+- Default: false
+
+
+
# Changelog
-## v1.42 - 2018-09-01
+## v1.44 - 2018-10-15
+
+* New commands
+ * serve ftp: Add ftp server (Antoine GIRARD)
+ * settier: perform storage tier changes on supported remotes (sandeepkru)
+* New Features
+ * Reworked command line help
+ * Make default help less verbose (Nick Craig-Wood)
+ * Split flags up into global and backend flags (Nick Craig-Wood)
+ * Implement specialised help for flags and backends (Nick Craig-Wood)
+ * Show URL of backend help page when starting config (Nick Craig-Wood)
+ * stats: Long names now split in center (Joanna Marek)
+ * Add --log-format flag for more control over log output (dcpu)
+ * rc: Add support for OPTIONS and basic CORS (frenos)
+ * stats: show FatalErrors and NoRetryErrors in stats (Cédric Connes)
+* Bug Fixes
+ * Fix -P not ending with a new line (Nick Craig-Wood)
+ * config: don't create default config dir when user supplies --config (albertony)
+ * Don't print non-ASCII characters with --progress on windows (Nick Craig-Wood)
+ * Correct logs for excluded items (ssaqua)
+* Mount
+ * Remove EXPERIMENTAL tags (Nick Craig-Wood)
+* VFS
+ * Fix race condition detected by serve ftp tests (Nick Craig-Wood)
+ * Add vfs/poll-interval rc command (Fabian Möller)
+ * Enable rename for nearly all remotes using server side Move or Copy (Nick Craig-Wood)
+ * Reduce directory cache cleared by poll-interval (Fabian Möller)
+ * Remove EXPERIMENTAL tags (Nick Craig-Wood)
+* Local
+ * Skip bad symlinks in dir listing with -L enabled (Cédric Connes)
+ * Preallocate files on Windows to reduce fragmentation (Nick Craig-Wood)
+ * Preallocate files on linux with fallocate(2) (Nick Craig-Wood)
+* Cache
+ * Add cache/fetch rc function (Fabian Möller)
+ * Fix worker scale down (Fabian Möller)
+ * Improve performance by not sending info requests for cached chunks (dcpu)
+ * Fix error return value of cache/fetch rc method (Fabian Möller)
+ * Documentation fix for cache-chunk-total-size (Anagh Kumar Baranwal)
+ * Preserve leading / in wrapped remote path (Fabian Möller)
+ * Add plex_insecure option to skip certificate validation (Fabian Möller)
+ * Remove entries that no longer exist in the source (dcpu)
+* Crypt
+ * Preserve leading / in wrapped remote path (Fabian Möller)
+* Alias
+ * Fix handling of Windows network paths (Nick Craig-Wood)
+* Azure Blob
+ * Add --azureblob-list-chunk parameter (Santiago Rodríguez)
+ * Implemented settier command support on azureblob remote. (sandeepkru)
+ * Work around SDK bug which causes errors for chunk-sized files (Nick Craig-Wood)
+* Box
+ * Implement link sharing. (Sebastian Bünger)
+* Drive
+ * Add --drive-import-formats - google docs can now be imported (Fabian Möller)
+ * Rewrite mime type and extension handling (Fabian Möller)
+ * Add document links (Fabian Möller)
+ * Add support for multipart document extensions (Fabian Möller)
+ * Add support for apps-script to json export (Fabian Möller)
+ * Fix escaped chars in documents during list (Fabian Möller)
+ * Add --drive-v2-download-min-size a workaround for slow downloads (Fabian Möller)
+ * Improve directory notifications in ChangeNotify (Fabian Möller)
+ * When listing team drives in config, continue on failure (Nick Craig-Wood)
+* FTP
+ * Add a small pause after failed upload before deleting file (Nick Craig-Wood)
+* Google Cloud Storage
+ * Fix service_account_file being ignored (Fabian Möller)
+* Jottacloud
+ * Minor improvement in quota info (omit if unlimited) (albertony)
+ * Add --fast-list support (albertony)
+ * Add permanent delete support: --jottacloud-hard-delete (albertony)
+ * Add link sharing support (albertony)
+ * Fix handling of reserved characters. (Sebastian Bünger)
+ * Fix socket leak on Object.Remove (Nick Craig-Wood)
+* Onedrive
+ * Rework to support Microsoft Graph (Cnly)
+ * **NB** this will require re-authenticating the remote
+ * Removed upload cutoff and always do session uploads (Oliver Heyme)
+ * Use single-part upload for empty files (Cnly)
+ * Fix new fields not saved when editing old config (Alex Chen)
+ * Fix sometimes special chars in filenames not replaced (Alex Chen)
+ * Ignore OneNote files by default (Alex Chen)
+ * Add link sharing support (jackyzy823)
+* S3
+ * Use custom pacer, to retry operations when reasonable (Craig Miskell)
+    * Use configured server-side-encryption and storage class options when calling CopyObject() (Paul Kohout)
+ * Make --s3-v2-auth flag (Nick Craig-Wood)
+ * Fix v2 auth on files with spaces (Nick Craig-Wood)
+* Union
+ * Implement union backend which reads from multiple backends (Felix Brucker)
+ * Implement optional interfaces (Move, DirMove, Copy etc) (Nick Craig-Wood)
+ * Fix ChangeNotify to support multiple remotes (Fabian Möller)
+ * Fix --backup-dir on union backend (Nick Craig-Wood)
+* WebDAV
+ * Add another time format (Nick Craig-Wood)
+ * Add a small pause after failed upload before deleting file (Nick Craig-Wood)
+ * Add workaround for missing mtime (buergi)
+ * Sharepoint: Renew cookies after 12hrs (Henning Surmeier)
+* Yandex
+ * Remove redundant nil checks (teresy)
+
+## v1.43.1 - 2018-09-07
+
+Point release to fix hubic and azureblob backends.
+
+* Bug Fixes
+ * ncdu: Return error instead of log.Fatal in Show (Fabian Möller)
+ * cmd: Fix crash with --progress and --stats 0 (Nick Craig-Wood)
+ * docs: Tidy website display (Anagh Kumar Baranwal)
+* Azure Blob:
+ * Fix multi-part uploads. (sandeepkru)
+* Hubic
+ * Fix uploads (Nick Craig-Wood)
+ * Retry auth fetching if it fails to make hubic more reliable (Nick Craig-Wood)
+
+## v1.43 - 2018-09-01
* New backends
* Jottacloud (Sebastian Bünger)
@@ -13393,7 +16263,7 @@ Contributors
* themylogin
* Onno Zweers
* Jasper Lievisse Adriaanse
- * sandeepkru
+ * sandeepkru
* HerrH
* Andrew <4030760+sparkyman215@users.noreply.github.com>
* dan smith
@@ -13408,6 +16278,28 @@ Contributors
* Alex Chen
* Denis
* bsteiss <35940619+bsteiss@users.noreply.github.com>
+ * Cédric Connes
+ * Dr. Tobias Quathamer
+ * dcpu <42736967+dcpu@users.noreply.github.com>
+ * Sheldon Rupp
+ * albertony <12441419+albertony@users.noreply.github.com>
+ * cron410
+ * Anagh Kumar Baranwal
+ * Felix Brucker
+ * Santiago Rodríguez
+ * Craig Miskell
+ * Antoine GIRARD
+ * Joanna Marek
+ * frenos
+ * ssaqua
+ * xnaas
+ * Frantisek Fuka
+ * Paul Kohout
+ * dcpu <43330287+dcpu@users.noreply.github.com>
+ * jackyzy823
+ * David Haguenauer
+ * teresy
+ * buergi
# Contact the rclone project #
diff --git a/MANUAL.txt b/MANUAL.txt
index 323ba789d..1579ecff5 100644
--- a/MANUAL.txt
+++ b/MANUAL.txt
@@ -1,6 +1,6 @@
rclone(1) User Manual
Nick Craig-Wood
-Sep 01, 2018
+Oct 15, 2018
@@ -57,14 +57,15 @@ Features
- Sync (one way) mode to make a directory identical
- Check mode to check for file hash equality
- Can sync to and from network, eg two different cloud accounts
-- Optional encryption (Crypt)
-- Optional cache (Cache)
+- (Encryption) backend
+- (Cache) backend
+- (Union) backend
- Optional FUSE mount (rclone mount)
Links
- Home page
-- Github project page for source and bug tracker
+- GitHub project page for source and bug tracker
- Rclone Forum
- Google+ page
- Downloads
@@ -229,6 +230,7 @@ See the following for detailed instructions for
- Pcloud
- QingStor
- SFTP
+- Union
- WebDAV
- Yandex Disk
- The local filesystem
@@ -1655,15 +1657,13 @@ Options
rclone mount
-Mount the remote as a mountpoint. EXPERIMENTAL
+Mount the remote as a file system on a mountpoint.
Synopsis
rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of
Rclone's cloud storage systems as a file system with FUSE.
-This is EXPERIMENTAL - use with care.
-
First set up your remote using rclone config. Check it works with
rclone ls etc.
@@ -1735,8 +1735,7 @@ File systems expect things to be 100% reliable, whereas cloud storage
systems are a long way from 100% reliable. The rclone sync/copy commands
cope with this with lots of retries. However rclone mount can't use
retries in the same way without making local copies of the uploads. Look
-at the EXPERIMENTAL file caching for solutions to make mount mount more
-reliable.
+at the file caching for solutions to make mount more reliable.
Attribute caching
@@ -1845,8 +1844,6 @@ used. The maximum memory used by rclone for buffering can be up to
File Caching
-NB File caching is EXPERIMENTAL - use with care!
-
These flags control the VFS file caching options. The VFS layer is used
by rclone mount to make a cloud storage system work more like a normal
file system.
@@ -2162,6 +2159,183 @@ Options
-h, --help help for serve
+rclone serve ftp
+
+Serve remote:path over FTP.
+
+Synopsis
+
+rclone serve ftp implements a basic ftp server to serve the remote over
+the FTP protocol. This can be browsed with an ftp client or you can make
+a remote of type ftp to read and write it.
+
+Server options
+
+Use --addr to specify which IP address and port the server should listen
+on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By
+default it only listens on localhost. You can use port :0 to let the OS
+choose an available port.
+
+If you set --addr to listen on a public or LAN accessible IP address
+then using Authentication is advised - see the next section for info.
+
+Authentication
+
+By default this will serve files without needing a login.
+
+You can set a single username and password with the --user and --pass
+flags.
+
+Directory Cache
+
+Using the --dir-cache-time flag, you can set how long a directory should
+be considered up to date and not refreshed from the backend. Changes
+made locally in the mount may appear immediately or invalidate the
+cache. However, changes done on the remote will only be picked up once
+the cache expires.
+
+Alternatively, you can send a SIGHUP signal to rclone for it to flush
+all directory caches, regardless of how old they are. Assuming only one
+rclone instance is running, you can reset the cache like this:
+
+ kill -SIGHUP $(pidof rclone)
+
+If you configure rclone with a remote control then you can use rclone rc
+to flush the whole directory cache:
+
+ rclone rc vfs/forget
+
+Or individual files or directories:
+
+ rclone rc vfs/forget file=path/to/file dir=path/to/dir
+
+File Buffering
+
+The --buffer-size flag determines the amount of memory that will be
+used to buffer data in advance.
+
+Each open file descriptor will try to keep the specified amount of data
+in memory at all times. The buffered data is bound to one file
+descriptor and won't be shared between multiple open file descriptors of
+the same file.
+
+This flag is an upper limit for the used memory per file descriptor. The
+buffer will only use memory for data that is downloaded but not yet
+read. If the buffer is empty, only a small amount of memory will be
+used. The maximum memory used by rclone for buffering can be up to
+--buffer-size * open files.
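As a worked example of that bound (16M is rclone's default --buffer-size; 100 open files is a hypothetical workload):

```shell
# Upper bound on buffer memory = --buffer-size * number of open files.
buffer_size_mb=16    # rclone's default --buffer-size
open_files=100       # hypothetical number of open file descriptors
echo "up to $(( buffer_size_mb * open_files ))M of buffer memory"
```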
+
+File Caching
+
+These flags control the VFS file caching options. The VFS layer is used
+by rclone mount to make a cloud storage system work more like a normal
+file system.
+
+You'll need to enable VFS caching if you want, for example, to read and
+write simultaneously to a file. See below for more details.
+
+Note that the VFS cache works in addition to the cache backend and you
+may find that you need one or the other or both.
+
+ --cache-dir string Directory rclone will use for caching.
+ --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+ --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+
+If run with -vv rclone will print the location of the file cache. The
+files are stored in the user cache file area which is OS dependent but
+can be controlled with --cache-dir or setting the appropriate
+environment variable.
+
+The cache has 4 different modes selected by --vfs-cache-mode. The higher
+the cache mode the more compatible rclone becomes at the cost of using
+disk space.
+
+Note that files are written back to the remote only when they are closed
+so if rclone is quit or dies with open files then these won't get
+written back to the remote. However they will still be in the on disk
+cache.
+
+--vfs-cache-mode off
+
+In this mode the cache will read directly from the remote and write
+directly to the remote without caching anything on disk.
+
+This will mean some operations are not possible
+
+- Files can't be opened for both read AND write
+- Files opened for write can't be seeked
+- Existing files opened for write must have O_TRUNC set
+- Files open for read with O_TRUNC will be opened write only
+- Files open for write only will behave as if O_TRUNC was supplied
+- Open modes O_APPEND, O_TRUNC are ignored
+- If an upload fails it can't be retried
+
+--vfs-cache-mode minimal
+
+This is very similar to "off" except that files opened for read AND
+write will be buffered to disk. This means that files opened for write
+will be a lot more compatible, but uses minimal disk space.
+
+These operations are not possible
+
+- Files opened for write only can't be seeked
+- Existing files opened for write must have O_TRUNC set
+- Files opened for write only will ignore O_APPEND, O_TRUNC
+- If an upload fails it can't be retried
+
+--vfs-cache-mode writes
+
+In this mode files opened for read only are still read directly from the
+remote, write only and read/write files are buffered to disk first.
+
+This mode should support all normal file system operations.
+
+If an upload fails it will be retried up to --low-level-retries times.
+
+--vfs-cache-mode full
+
+In this mode all reads and writes are buffered to and from disk. When a
+file is opened for read it will be downloaded in its entirety first.
+
+This may be appropriate for your needs, or you may prefer to look at the
+cache backend which does a much more sophisticated job of caching,
+including caching directory hierarchies and chunks of files.
+
+In this mode, unlike the others, when a file is written to the disk, it
+will be kept on the disk after it is written to the remote. It will be
+purged on a schedule according to --vfs-cache-max-age.
+
+This mode should support all normal file system operations.
+
+If an upload or download fails it will be retried up to
+--low-level-retries times.
+
+ rclone serve ftp remote:path [flags]
+
+Options
+
+ --addr string IPaddress:Port or :Port to bind server to. (default "localhost:2121")
+ --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+ --gid uint32 Override the gid field set by the filesystem. (default 502)
+ -h, --help help for ftp
+ --no-checksum Don't compare checksums on up/download.
+ --no-modtime Don't read/write the modification time (can speed things up).
+ --no-seek Don't allow seeking in files.
+ --pass string Password for authentication. (empty value allow every password)
+ --passive-port string Passive port range to use. (default "30000-32000")
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
+ --read-only Mount read-only.
+ --uid uint32 Override the uid field set by the filesystem. (default 502)
+ --umask int Override the permission bits set by the filesystem. (default 2)
+ --user string User name for authentication. (default "anonymous")
+ --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+ --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+ --vfs-read-chunk-size int Read the source objects in chunks. (default 128M)
+ --vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
+
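The chunk-size doubling described by --vfs-read-chunk-size-limit can be sketched as follows (illustrative only, not rclone source; the 512M limit below is an assumed setting):

```python
M = 1024 * 1024


def read_chunk_sizes(initial, limit, n):
    """Sizes of the first n read chunks: the chunk size doubles after
    each chunk read, capped at limit (None models 'off', unlimited)."""
    sizes, size = [], initial
    for _ in range(n):
        sizes.append(size)
        size *= 2
        if limit is not None:
            size = min(size, limit)
    return sizes


# Default 128M initial size with an assumed 512M limit:
print([s // M for s in read_chunk_sizes(128 * M, 512 * M, 5)])
# [128, 256, 512, 512, 512]
```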
+
rclone serve http
Serve the remote over HTTP.
@@ -2271,8 +2445,6 @@ used. The maximum memory used by rclone for buffering can be up to
File Caching
-NB File caching is EXPERIMENTAL - use with care!
-
These flags control the VFS file caching options. The VFS layer is used
by rclone mount to make a cloud storage system work more like a normal
file system.
@@ -2649,8 +2821,6 @@ used. The maximum memory used by rclone for buffering can be up to
File Caching
-NB File caching is EXPERIMENTAL - use with care!
-
These flags control the VFS file caching options. The VFS layer is used
by rclone mount to make a cloud storage system work more like a normal
file system.
@@ -2768,6 +2938,42 @@ Options
--vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
+rclone settier
+
+Changes storage class/tier of objects in remote.
+
+Synopsis
+
+rclone settier changes the storage tier or class at the remote if
+supported. A few cloud storage services provide different storage
+classes on objects, for example AWS S3 and Glacier, Azure Blob storage -
+Hot, Cool and Archive, Google Cloud Storage - Regional Storage,
+Nearline, Coldline etc.
+
+Note that certain tier changes make objects unavailable for immediate
+access. For example, tiering to archive in Azure Blob storage puts
+objects into a frozen state; the user can restore them by setting the
+tier to Hot/Cool. Similarly, moving S3 objects to Glacier makes them
+inaccessible.
+
+You can use it to tier a single object
+
+ rclone settier Cool remote:path/file
+
+Or use rclone filters to set tier on only specific files
+
+ rclone --include "*.txt" settier Hot remote:path/dir
+
+Or just provide a remote directory and all files in the directory will
+be tiered
+
+ rclone settier tier remote:path/dir
+
+ rclone settier tier remote:path [flags]
+
+Options
+
+ -h, --help help for settier
+
+
rclone touch
Create new file or change file modification time.
@@ -3282,6 +3488,11 @@ Note that if you are using the logrotate program to manage rclone's
logs, then you should use the copytruncate option as rclone doesn't have
a signal to rotate logs.
+--log-format LIST
+
+Comma separated list of log format options. date, time, microseconds,
+longfile, shortfile, UTC. The default is "date,time".
+
--log-level LEVEL
This sets the log level for rclone. The default log level is NOTICE.
@@ -3391,7 +3602,7 @@ files if they are incorrect as it would normally.
This can be used if the remote is being synced with another tool also
(eg the Google Drive client).
---P, --progress
+-P, --progress
This flag makes rclone update the stats in a static block in the
terminal providing a realtime overview of the transfer.
@@ -3405,6 +3616,9 @@ with the --stats flag.
This can be used with the --stats-one-line flag for a simpler display.
+Note: On Windows until this bug is fixed all non-ASCII characters will
+be replaced with . when --progress is in use.
+
-q, --quiet
Normally rclone outputs stats and a completion message. If you set this
@@ -3556,7 +3770,8 @@ will be considered.
If the destination does not support server-side copy or move, rclone
will fall back to the default behaviour and log an error level message
-to the console.
+to the console. Note: Encrypted destinations are not supported by
+--track-renames.
Note that --track-renames uses extra memory to keep track of all the
rename candidates.
@@ -4629,6 +4844,32 @@ Eg
rclone rc cache/expire remote=path/to/sub/folder/
rclone rc cache/expire remote=/ withData=true
+cache/fetch: Fetch file chunks
+
+Ensure the specified file chunks are cached on disk.
+
+The chunks= parameter specifies the file chunks to check. It takes a
+comma separated list of array slice indices. The slice indices are
+similar to Python slices: start[:end]
+
+start is the 0 based chunk number from the beginning of the file to
+fetch inclusive. end is the 0 based chunk number from the beginning of
+the file to fetch exclusive. Both values can be negative, in which case
+they count from the back of the file. The value "-5:" represents the
+last 5 chunks of a file.
+
+Some valid examples are:
+
+- ":5,-5:" -> the first and last five chunks
+- "0,-2" -> the first and the second last chunk
+- "0:10" -> the first ten chunks
+
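The slice semantics can be sketched in Python (illustrative only; the expand_chunks helper below is hypothetical, not part of rclone):

```python
def expand_chunks(spec, total_chunks):
    """Expand a cache/fetch chunks= spec into 0-based chunk indices,
    using Python-style start[:end] slice semantics."""
    indices = []
    for part in spec.split(","):
        if ":" in part:
            start_s, end_s = part.split(":")
            start = int(start_s) if start_s else 0
            end = int(end_s) if end_s else total_chunks
        else:
            start = int(part)
            end = start + 1
        # Negative values count from the back of the file.
        if start < 0:
            start += total_chunks
        if end < 0:
            end += total_chunks
        indices.extend(range(start, end))
    return indices


# ":5,-5:" -> the first and last five chunks of a 20-chunk file
print(expand_chunks(":5,-5:", 20))  # [0, 1, 2, 3, 4, 15, 16, 17, 18, 19]
```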
+Any parameter with a key that starts with "file" can be used to specify
+files to fetch, eg
+
+ rclone rc cache/fetch chunks=0 file=hello file2=home/goodbye
+
+File names will automatically be encrypted when a crypt remote is
+used on top of the cache.
+
cache/stats: Get cache stats
Show statistics for the cache remote.
@@ -4681,6 +4922,8 @@ Returns the following values:
"speed": average speed in bytes/sec since start of the process,
"bytes": total transferred bytes since the start of the process,
"errors": number of errors,
+ "fatalError": whether there has been at least one FatalError,
+ "retryError": whether there has been at least one non-NoRetryError,
"checks": number of checked files,
"transfers": number of transferred files,
"deletes" : number of deleted files,
@@ -4738,6 +4981,27 @@ will forget that dir, eg
rclone rc vfs/forget file=hello file2=goodbye dir=home/junk
+vfs/poll-interval: Get the status or update the value of the poll-interval option.
+
+Without any parameter given this returns the current status of the
+poll-interval setting.
+
+When the interval=duration parameter is set, the poll-interval value is
+updated and the polling function is notified. Setting interval=0
+disables poll-interval.
+
+ rclone rc vfs/poll-interval interval=5m
+
+The timeout=duration parameter can be used to specify a time to wait
+for the current poll function to apply the new value. If timeout is less
+than or equal to 0, which is the default, rclone waits indefinitely.
+
+The new poll-interval value will only take effect if the timeout is not
+reached.
+
+If poll-interval is updated or disabled temporarily, some changes might
+not get picked up by the polling function, depending on the used remote.
+
vfs/refresh: Refresh the directory cache.
This reads the directories for the specified paths and freshens the
@@ -4775,6 +5039,10 @@ formatted to be reasonably human readable.
If an error occurs then there will be an HTTP error status (usually 400)
and the body of the response will contain a JSON encoded error object.
+The server implements basic CORS support and allows all origins for that.
+The response to a preflight OPTIONS request will echo the requested
+"Access-Control-Request-Headers" back.
+
Using POST with URL parameters only
curl -X POST 'http://localhost:5572/rc/noop/?potato=1&sausage=2'
@@ -5027,17 +5295,17 @@ more efficient.
Amazon Drive Yes No Yes Yes No #575 No No No #2178 No
Amazon S3 No Yes No No No Yes Yes No #2178 No
Backblaze B2 No No No No Yes Yes Yes No #2178 No
- Box Yes Yes Yes Yes No #575 No Yes No #2178 No
+ Box Yes Yes Yes Yes No #575 No Yes Yes No
Dropbox Yes Yes Yes Yes No #575 No Yes Yes Yes
FTP No No Yes Yes No No Yes No #2178 No
Google Cloud Storage Yes Yes No No No Yes Yes No #2178 No
Google Drive Yes Yes Yes Yes Yes Yes Yes Yes Yes
HTTP No No No No No No No No #2178 No
Hubic Yes † Yes No No No Yes Yes No #2178 Yes
- Jottacloud Yes Yes Yes Yes No No No No No
+ Jottacloud Yes Yes Yes Yes No Yes No Yes Yes
Mega Yes No Yes Yes No No No No #2178 Yes
Microsoft Azure Blob Storage Yes Yes No No No Yes No No #2178 No
- Microsoft OneDrive Yes Yes Yes Yes No #575 No No No #2178 Yes
+ Microsoft OneDrive Yes Yes Yes Yes No #575 No No Yes Yes
OpenDrive Yes Yes Yes Yes No No No No No
Openstack Swift Yes † Yes No No No Yes Yes No #2178 Yes
pCloud Yes Yes Yes Yes Yes No No No #2178 Yes
@@ -5237,6 +5505,21 @@ Copy another local directory to the alias directory called source
rclone copy /home/source remote:source
+Standard Options
+
+Here are the standard options specific to alias (Alias for an existing
+remote).
+
+--alias-remote
+
+Remote or path to alias. Can be "myremote:path/to/dir",
+"myremote:bucket", "myremote:" or "/local/path".
+
+- Config: remote
+- Env Var: RCLONE_ALIAS_REMOTE
+- Type: string
+- Default: ""
+
Amazon Drive
@@ -5402,22 +5685,65 @@ Let's say you usually use amazon.co.uk. When you authenticate with
rclone it will take you to an amazon.com page to log in. Your
amazon.co.uk email and password should work here just fine.
-Specific options
+Standard Options
-Here are the command line options specific to this cloud storage system.
+Here are the standard options specific to amazon cloud drive (Amazon
+Drive).
---acd-templink-threshold=SIZE
+--acd-client-id
-Files this size or more will be downloaded via their tempLink. This is
-to work around a problem with Amazon Drive which blocks downloads of
-files bigger than about 10GB. The default for this is 9GB which
-shouldn't need to be changed.
+Amazon Application Client ID.
-To download files above this threshold, rclone requests a tempLink which
-downloads the file through a temporary URL directly from the underlying
-S3 storage.
+- Config: client_id
+- Env Var: RCLONE_ACD_CLIENT_ID
+- Type: string
+- Default: ""
---acd-upload-wait-per-gb=TIME
+--acd-client-secret
+
+Amazon Application Client Secret.
+
+- Config: client_secret
+- Env Var: RCLONE_ACD_CLIENT_SECRET
+- Type: string
+- Default: ""
+
+Advanced Options
+
+Here are the advanced options specific to amazon cloud drive (Amazon
+Drive).
+
+--acd-auth-url
+
+Auth server URL. Leave blank to use Amazon's.
+
+- Config: auth_url
+- Env Var: RCLONE_ACD_AUTH_URL
+- Type: string
+- Default: ""
+
+--acd-token-url
+
+Token server URL. Leave blank to use Amazon's.
+
+- Config: token_url
+- Env Var: RCLONE_ACD_TOKEN_URL
+- Type: string
+- Default: ""
+
+--acd-checkpoint
+
+Checkpoint for internal polling (debug).
+
+- Config: checkpoint
+- Env Var: RCLONE_ACD_CHECKPOINT
+- Type: string
+- Default: ""
+
+--acd-upload-wait-per-gb
+
+Additional time per GB to wait after a failed complete upload to see if
+it appears.
Sometimes Amazon Drive gives an error when a file has been fully
uploaded but the file appears anyway after a little while. This happens
@@ -5435,9 +5761,32 @@ appear correctly eventually.
These values were determined empirically by observing lots of uploads of
big files for a range of file sizes.
-Upload with the -v flag to see more info about what rclone is doing in
+Upload with the "-v" flag to see more info about what rclone is doing in
this situation.
+- Config: upload_wait_per_gb
+- Env Var: RCLONE_ACD_UPLOAD_WAIT_PER_GB
+- Type: Duration
+- Default: 3m0s
+
+--acd-templink-threshold
+
+Files >= this size will be downloaded via their tempLink.
+
+Files this size or more will be downloaded via their "tempLink". This is
+to work around a problem with Amazon Drive which blocks downloads of
+files bigger than about 10GB. The default for this is 9GB which
+shouldn't need to be changed.
+
+To download files above this threshold, rclone requests a "tempLink"
+which downloads the file through a temporary URL directly from the
+underlying S3 storage.
+
+- Config: templink_threshold
+- Env Var: RCLONE_ACD_TEMPLINK_THRESHOLD
+- Type: SizeSuffix
+- Default: 9G
+
Limitations
Note that Amazon Drive is case insensitive so you can't have a file
@@ -5827,55 +6176,562 @@ tries to access the data you will see an error like below.
In this case you need to restore the object(s) in question before using
rclone.
-Specific options
+Standard Options
-Here are the command line options specific to this cloud storage system.
+Here are the standard options specific to s3 (Amazon S3 Compliant
+Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)).
---s3-acl=STRING
+--s3-provider
-Canned ACL used when creating buckets and/or storing objects in S3.
+Choose your S3 provider.
-For more info visit the canned ACL docs.
+- Config: provider
+- Env Var: RCLONE_S3_PROVIDER
+- Type: string
+- Default: ""
+- Examples:
+ - "AWS"
+ - Amazon Web Services (AWS) S3
+ - "Ceph"
+ - Ceph Object Storage
+ - "DigitalOcean"
+ - Digital Ocean Spaces
+ - "Dreamhost"
+ - Dreamhost DreamObjects
+ - "IBMCOS"
+ - IBM COS S3
+ - "Minio"
+ - Minio Object Storage
+ - "Wasabi"
+ - Wasabi Object Storage
+ - "Other"
+ - Any other S3 compatible provider
---s3-storage-class=STRING
+--s3-env-auth
-Storage class to upload new objects with.
+Get AWS credentials from runtime (environment variables or EC2/ECS meta
+data if no env vars). Only applies if access_key_id and
+secret_access_key are blank.
-Available options include:
+- Config: env_auth
+- Env Var: RCLONE_S3_ENV_AUTH
+- Type: bool
+- Default: false
+- Examples:
+ - "false"
+ - Enter AWS credentials in the next step
+ - "true"
+ - Get AWS credentials from the environment (env vars or IAM)
-- STANDARD - default storage class
-- STANDARD_IA - for less frequently accessed data (e.g backups)
-- ONEZONE_IA - for storing data in only one Availability Zone
-- REDUCED_REDUNDANCY (only for noncritical, reproducible data, has
- lower redundancy)
+--s3-access-key-id
---s3-chunk-size=SIZE
+AWS Access Key ID. Leave blank for anonymous access or runtime
+credentials.
+
+- Config: access_key_id
+- Env Var: RCLONE_S3_ACCESS_KEY_ID
+- Type: string
+- Default: ""
+
+--s3-secret-access-key
+
+AWS Secret Access Key (password). Leave blank for anonymous access or
+runtime credentials.
+
+- Config: secret_access_key
+- Env Var: RCLONE_S3_SECRET_ACCESS_KEY
+- Type: string
+- Default: ""
+
+--s3-region
+
+Region to connect to.
+
+- Config: region
+- Env Var: RCLONE_S3_REGION
+- Type: string
+- Default: ""
+- Examples:
+ - "us-east-1"
+ - The default endpoint - a good choice if you are unsure.
+ - US Region, Northern Virginia or Pacific Northwest.
+ - Leave location constraint empty.
+ - "us-east-2"
+ - US East (Ohio) Region
+ - Needs location constraint us-east-2.
+ - "us-west-2"
+ - US West (Oregon) Region
+ - Needs location constraint us-west-2.
+ - "us-west-1"
+ - US West (Northern California) Region
+ - Needs location constraint us-west-1.
+ - "ca-central-1"
+ - Canada (Central) Region
+ - Needs location constraint ca-central-1.
+ - "eu-west-1"
+ - EU (Ireland) Region
+ - Needs location constraint EU or eu-west-1.
+ - "eu-west-2"
+ - EU (London) Region
+ - Needs location constraint eu-west-2.
+ - "eu-central-1"
+ - EU (Frankfurt) Region
+ - Needs location constraint eu-central-1.
+ - "ap-southeast-1"
+ - Asia Pacific (Singapore) Region
+ - Needs location constraint ap-southeast-1.
+ - "ap-southeast-2"
+ - Asia Pacific (Sydney) Region
+ - Needs location constraint ap-southeast-2.
+ - "ap-northeast-1"
+ - Asia Pacific (Tokyo) Region
+ - Needs location constraint ap-northeast-1.
+ - "ap-northeast-2"
+ - Asia Pacific (Seoul)
+ - Needs location constraint ap-northeast-2.
+ - "ap-south-1"
+ - Asia Pacific (Mumbai)
+ - Needs location constraint ap-south-1.
+ - "sa-east-1"
+ - South America (Sao Paulo) Region
+ - Needs location constraint sa-east-1.
+
+--s3-region
+
+Region to connect to. Leave blank if you are using an S3 clone and you
+don't have a region.
+
+- Config: region
+- Env Var: RCLONE_S3_REGION
+- Type: string
+- Default: ""
+- Examples:
+ - ""
+ - Use this if unsure. Will use v4 signatures and an empty
+ region.
+ - "other-v2-signature"
+ - Use this only if v4 signatures don't work, eg pre Jewel/v10
+ CEPH.
+
+--s3-endpoint
+
+Endpoint for S3 API. Leave blank if using AWS to use the default
+endpoint for the region.
+
+- Config: endpoint
+- Env Var: RCLONE_S3_ENDPOINT
+- Type: string
+- Default: ""
+
+--s3-endpoint
+
+Endpoint for IBM COS S3 API. Specify if using an IBM COS On Premise.
+
+- Config: endpoint
+- Env Var: RCLONE_S3_ENDPOINT
+- Type: string
+- Default: ""
+- Examples:
+ - "s3-api.us-geo.objectstorage.softlayer.net"
+ - US Cross Region Endpoint
+ - "s3-api.dal.us-geo.objectstorage.softlayer.net"
+ - US Cross Region Dallas Endpoint
+ - "s3-api.wdc-us-geo.objectstorage.softlayer.net"
+ - US Cross Region Washington DC Endpoint
+ - "s3-api.sjc-us-geo.objectstorage.softlayer.net"
+ - US Cross Region San Jose Endpoint
+ - "s3-api.us-geo.objectstorage.service.networklayer.com"
+ - US Cross Region Private Endpoint
+ - "s3-api.dal-us-geo.objectstorage.service.networklayer.com"
+ - US Cross Region Dallas Private Endpoint
+ - "s3-api.wdc-us-geo.objectstorage.service.networklayer.com"
+ - US Cross Region Washington DC Private Endpoint
+ - "s3-api.sjc-us-geo.objectstorage.service.networklayer.com"
+ - US Cross Region San Jose Private Endpoint
+ - "s3.us-east.objectstorage.softlayer.net"
+ - US Region East Endpoint
+ - "s3.us-east.objectstorage.service.networklayer.com"
+ - US Region East Private Endpoint
+ - "s3.us-south.objectstorage.softlayer.net"
+ - US Region South Endpoint
+ - "s3.us-south.objectstorage.service.networklayer.com"
+ - US Region South Private Endpoint
+ - "s3.eu-geo.objectstorage.softlayer.net"
+ - EU Cross Region Endpoint
+ - "s3.fra-eu-geo.objectstorage.softlayer.net"
+ - EU Cross Region Frankfurt Endpoint
+ - "s3.mil-eu-geo.objectstorage.softlayer.net"
+ - EU Cross Region Milan Endpoint
+ - "s3.ams-eu-geo.objectstorage.softlayer.net"
+ - EU Cross Region Amsterdam Endpoint
+ - "s3.eu-geo.objectstorage.service.networklayer.com"
+ - EU Cross Region Private Endpoint
+ - "s3.fra-eu-geo.objectstorage.service.networklayer.com"
+ - EU Cross Region Frankfurt Private Endpoint
+ - "s3.mil-eu-geo.objectstorage.service.networklayer.com"
+ - EU Cross Region Milan Private Endpoint
+ - "s3.ams-eu-geo.objectstorage.service.networklayer.com"
+ - EU Cross Region Amsterdam Private Endpoint
+ - "s3.eu-gb.objectstorage.softlayer.net"
+ - Great Britain Endpoint
+ - "s3.eu-gb.objectstorage.service.networklayer.com"
+ - Great Britain Private Endpoint
+ - "s3.ap-geo.objectstorage.softlayer.net"
+ - APAC Cross Regional Endpoint
+ - "s3.tok-ap-geo.objectstorage.softlayer.net"
+ - APAC Cross Regional Tokyo Endpoint
+ - "s3.hkg-ap-geo.objectstorage.softlayer.net"
+ - APAC Cross Regional HongKong Endpoint
+ - "s3.seo-ap-geo.objectstorage.softlayer.net"
+ - APAC Cross Regional Seoul Endpoint
+ - "s3.ap-geo.objectstorage.service.networklayer.com"
+ - APAC Cross Regional Private Endpoint
+ - "s3.tok-ap-geo.objectstorage.service.networklayer.com"
+ - APAC Cross Regional Tokyo Private Endpoint
+ - "s3.hkg-ap-geo.objectstorage.service.networklayer.com"
+ - APAC Cross Regional HongKong Private Endpoint
+ - "s3.seo-ap-geo.objectstorage.service.networklayer.com"
+ - APAC Cross Regional Seoul Private Endpoint
+ - "s3.mel01.objectstorage.softlayer.net"
+ - Melbourne Single Site Endpoint
+ - "s3.mel01.objectstorage.service.networklayer.com"
+ - Melbourne Single Site Private Endpoint
+ - "s3.tor01.objectstorage.softlayer.net"
+ - Toronto Single Site Endpoint
+ - "s3.tor01.objectstorage.service.networklayer.com"
+ - Toronto Single Site Private Endpoint
+
+--s3-endpoint
+
+Endpoint for S3 API. Required when using an S3 clone.
+
+- Config: endpoint
+- Env Var: RCLONE_S3_ENDPOINT
+- Type: string
+- Default: ""
+- Examples:
+ - "objects-us-west-1.dream.io"
+ - Dream Objects endpoint
+ - "nyc3.digitaloceanspaces.com"
+ - Digital Ocean Spaces New York 3
+ - "ams3.digitaloceanspaces.com"
+ - Digital Ocean Spaces Amsterdam 3
+ - "sgp1.digitaloceanspaces.com"
+ - Digital Ocean Spaces Singapore 1
+ - "s3.wasabisys.com"
+ - Wasabi Object Storage
+
+--s3-location-constraint
+
+Location constraint - must be set to match the Region. Used when
+creating buckets only.
+
+- Config: location_constraint
+- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
+- Type: string
+- Default: ""
+- Examples:
+ - ""
+ - Empty for US Region, Northern Virginia or Pacific Northwest.
+ - "us-east-2"
+ - US East (Ohio) Region.
+ - "us-west-2"
+ - US West (Oregon) Region.
+ - "us-west-1"
+ - US West (Northern California) Region.
+ - "ca-central-1"
+ - Canada (Central) Region.
+ - "eu-west-1"
+ - EU (Ireland) Region.
+ - "eu-west-2"
+ - EU (London) Region.
+ - "EU"
+ - EU Region.
+ - "ap-southeast-1"
+ - Asia Pacific (Singapore) Region.
+ - "ap-southeast-2"
+ - Asia Pacific (Sydney) Region.
+ - "ap-northeast-1"
+ - Asia Pacific (Tokyo) Region.
+ - "ap-northeast-2"
+ - Asia Pacific (Seoul)
+ - "ap-south-1"
+ - Asia Pacific (Mumbai)
+ - "sa-east-1"
+ - South America (Sao Paulo) Region.
+
+--s3-location-constraint
+
+Location constraint - must match endpoint when using IBM Cloud Public.
+For on-prem COS, do not make a selection from this list; just hit enter.
+
+- Config: location_constraint
+- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
+- Type: string
+- Default: ""
+- Examples:
+ - "us-standard"
+ - US Cross Region Standard
+ - "us-vault"
+ - US Cross Region Vault
+ - "us-cold"
+ - US Cross Region Cold
+ - "us-flex"
+ - US Cross Region Flex
+ - "us-east-standard"
+ - US East Region Standard
+ - "us-east-vault"
+ - US East Region Vault
+ - "us-east-cold"
+ - US East Region Cold
+ - "us-east-flex"
+ - US East Region Flex
+ - "us-south-standard"
+ - US South Region Standard
+ - "us-south-vault"
+ - US South Region Vault
+ - "us-south-cold"
+ - US South Region Cold
+ - "us-south-flex"
+ - US South Region Flex
+ - "eu-standard"
+ - EU Cross Region Standard
+ - "eu-vault"
+ - EU Cross Region Vault
+ - "eu-cold"
+ - EU Cross Region Cold
+ - "eu-flex"
+ - EU Cross Region Flex
+ - "eu-gb-standard"
+ - Great Britain Standard
+ - "eu-gb-vault"
+ - Great Britain Vault
+ - "eu-gb-cold"
+ - Great Britain Cold
+ - "eu-gb-flex"
+ - Great Britain Flex
+ - "ap-standard"
+ - APAC Standard
+ - "ap-vault"
+ - APAC Vault
+ - "ap-cold"
+ - APAC Cold
+ - "ap-flex"
+ - APAC Flex
+ - "mel01-standard"
+ - Melbourne Standard
+ - "mel01-vault"
+ - Melbourne Vault
+ - "mel01-cold"
+ - Melbourne Cold
+ - "mel01-flex"
+ - Melbourne Flex
+ - "tor01-standard"
+ - Toronto Standard
+ - "tor01-vault"
+ - Toronto Vault
+ - "tor01-cold"
+ - Toronto Cold
+ - "tor01-flex"
+ - Toronto Flex
+
+--s3-location-constraint
+
+Location constraint - must be set to match the Region. Leave blank if
+not sure. Used when creating buckets only.
+
+- Config: location_constraint
+- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
+- Type: string
+- Default: ""
+
+--s3-acl
+
+Canned ACL used when creating buckets and/or storing objects in S3. For
+more info visit
+https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
+
+- Config: acl
+- Env Var: RCLONE_S3_ACL
+- Type: string
+- Default: ""
+- Examples:
+ - "private"
+ - Owner gets FULL_CONTROL. No one else has access rights
+ (default).
+ - "public-read"
+ - Owner gets FULL_CONTROL. The AllUsers group gets READ
+ access.
+ - "public-read-write"
+ - Owner gets FULL_CONTROL. The AllUsers group gets READ and
+ WRITE access.
+ - Granting this on a bucket is generally not recommended.
+ - "authenticated-read"
+ - Owner gets FULL_CONTROL. The AuthenticatedUsers group gets
+ READ access.
+ - "bucket-owner-read"
+ - Object owner gets FULL_CONTROL. Bucket owner gets READ
+ access.
+ - If you specify this canned ACL when creating a bucket,
+ Amazon S3 ignores it.
+ - "bucket-owner-full-control"
+ - Both the object owner and the bucket owner get FULL_CONTROL
+ over the object.
+ - If you specify this canned ACL when creating a bucket,
+ Amazon S3 ignores it.
+ - "private"
+ - Owner gets FULL_CONTROL. No one else has access rights
+ (default). This acl is available on IBM Cloud (Infra), IBM
+ Cloud (Storage), On-Premise COS
+ - "public-read"
+ - Owner gets FULL_CONTROL. The AllUsers group gets READ
+ access. This acl is available on IBM Cloud (Infra), IBM
+ Cloud (Storage), On-Premise IBM COS
+ - "public-read-write"
+ - Owner gets FULL_CONTROL. The AllUsers group gets READ and
+ WRITE access. This acl is available on IBM Cloud (Infra),
+ On-Premise IBM COS
+ - "authenticated-read"
+ - Owner gets FULL_CONTROL. The AuthenticatedUsers group gets
+ READ access. Not supported on Buckets. This acl is available
+ on IBM Cloud (Infra) and On-Premise IBM COS
+
+--s3-server-side-encryption
+
+The server-side encryption algorithm used when storing this object in
+S3.
+
+- Config: server_side_encryption
+- Env Var: RCLONE_S3_SERVER_SIDE_ENCRYPTION
+- Type: string
+- Default: ""
+- Examples:
+ - ""
+ - None
+ - "AES256"
+ - AES256
+ - "aws:kms"
+ - aws:kms
+
+--s3-sse-kms-key-id
+
+If using KMS ID you must provide the ARN of Key.
+
+- Config: sse_kms_key_id
+- Env Var: RCLONE_S3_SSE_KMS_KEY_ID
+- Type: string
+- Default: ""
+- Examples:
+ - ""
+ - None
+ - "arn:aws:kms:us-east-1:*"
+ - arn:aws:kms:*
+
+--s3-storage-class
+
+The storage class to use when storing new objects in S3.
+
+- Config: storage_class
+- Env Var: RCLONE_S3_STORAGE_CLASS
+- Type: string
+- Default: ""
+- Examples:
+ - ""
+ - Default
+ - "STANDARD"
+ - Standard storage class
+ - "REDUCED_REDUNDANCY"
+ - Reduced redundancy storage class
+ - "STANDARD_IA"
+ - Standard Infrequent Access storage class
+ - "ONEZONE_IA"
+ - One Zone Infrequent Access storage class
+
+Advanced Options
+
+Here are the advanced options specific to s3 (Amazon S3 Compliant
+Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)).
+
+--s3-chunk-size
+
+Chunk size to use for uploading.
Any files larger than this will be uploaded in chunks of this size. The
default is 5MB. The minimum is 5MB.
-Note that 2 chunks of this size are buffered in memory per transfer.
+Note that "--s3-upload-concurrency" chunks of this size are buffered in
+memory per transfer.
If you are transferring large files over high speed links and you have
enough memory, then increasing this will speed up the transfers.
---s3-force-path-style=BOOL
+- Config: chunk_size
+- Env Var: RCLONE_S3_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 5M
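As a back-of-the-envelope sketch (illustrative only; the --transfers value of 4 used below is an assumption, not stated in this section), the upload buffering multiplies out as:

```python
M = 1024 * 1024


def s3_upload_buffer_bytes(chunk_size, upload_concurrency, transfers):
    """Rough worst-case S3 upload buffering: --s3-upload-concurrency
    chunks of --s3-chunk-size are buffered per transfer."""
    return chunk_size * upload_concurrency * transfers


# Defaults: 5M chunks, --s3-upload-concurrency 2, assumed --transfers 4.
print(s3_upload_buffer_bytes(5 * M, 2, 4) // M)  # 40 (MiB)
```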
+
+--s3-disable-checksum
+
+Don't store MD5 checksum with object metadata
+
+- Config: disable_checksum
+- Env Var: RCLONE_S3_DISABLE_CHECKSUM
+- Type: bool
+- Default: false
+
+--s3-session-token
+
+An AWS session token
+
+- Config: session_token
+- Env Var: RCLONE_S3_SESSION_TOKEN
+- Type: string
+- Default: ""
+
+--s3-upload-concurrency
+
+Concurrency for multipart uploads.
+
+This is the number of chunks of the same file that are uploaded
+concurrently.
+
+If you are uploading small numbers of large files over a high speed link
+and these uploads do not fully utilize your bandwidth, then increasing
+this may help to speed up the transfers.
+
+- Config: upload_concurrency
+- Env Var: RCLONE_S3_UPLOAD_CONCURRENCY
+- Type: int
+- Default: 2
+
+--s3-force-path-style
+
+If true use path style access, if false use virtual hosted style.
If this is true (the default) then rclone will use path style access, if
false then rclone will use virtual path style. See the AWS S3 docs for
more info.
Some providers (eg Aliyun OSS or Netease COS) require this set to false.
-It can also be set in the config in the advanced section.
---s3-upload-concurrency
+- Config: force_path_style
+- Env Var: RCLONE_S3_FORCE_PATH_STYLE
+- Type: bool
+- Default: true
-Number of chunks of the same file that are uploaded concurrently.
-Default is 2.
+--s3-v2-auth
-If you are uploading small amount of large file over high speed link and
-these uploads do not fully utilize your bandwidth, then increasing this
-may help to speed up the transfers.
+If true use v2 authentication.
+
+If this is false (the default) then rclone will use v4 authentication.
+If it is set then rclone will use v2 authentication.
+
+Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.
+
+- Config: v2_auth
+- Env Var: RCLONE_S3_V2_AUTH
+- Type: bool
+- Default: false
Anonymous access to public buckets
@@ -6633,6 +7489,9 @@ versions of files, leaving the current ones intact. You can also supply
a path and only old versions under that path will be deleted, eg
rclone cleanup remote:bucket/path/to/stuff.
+Note that cleanup does not remove partially uploaded files from the
+bucket.
+
When you purge a bucket, the current and the old versions will be
deleted then the bucket will be deleted.
@@ -6702,43 +7561,10 @@ start and finish the upload) and another 2 requests for each chunk:
/b2api/v1/b2_upload_part/
/b2api/v1/b2_finish_large_file
-Specific options
+Versions
-Here are the command line options specific to this cloud storage system.
-
---b2-chunk-size valuee=SIZE
-
-When uploading large files chunk the file into this size. Note that
-these chunks are buffered in memory and there might a maximum of
---transfers chunks in progress at once. 5,000,000 Bytes is the minimim
-size (default 96M).
-
---b2-upload-cutoff=SIZE
-
-Cutoff for switching to chunked upload (default 190.735 MiB == 200 MB).
-Files above this size will be uploaded in chunks of --b2-chunk-size.
-
-This value should be set no larger than 4.657GiB (== 5GB) as this is the
-largest file size that can be uploaded.
-
---b2-test-mode=FLAG
-
-This is for debugging purposes only.
-
-Setting FLAG to one of the strings below will cause b2 to return
-specific errors for debugging purposes.
-
-- fail_some_uploads
-- expire_some_account_authorization_tokens
-- force_cap_exceeded
-
-These will be set in the X-Bz-Test-Mode header which is documented in
-the b2 integrations checklist.
-
---b2-versions
-
-When set rclone will show and act on older versions of files. For
-example
+Versions can be viewed with the --b2-versions flag. When it is set rclone
+will show and act on older versions of files. For example
Listing without --b2-versions
@@ -6760,6 +7586,107 @@ the nearest millisecond appended to them.
Note that when using --b2-versions no file write operations are
permitted, so you can't upload files or delete them.
+Standard Options
+
+Here are the standard options specific to b2 (Backblaze B2).
+
+--b2-account
+
+Account ID or Application Key ID
+
+- Config: account
+- Env Var: RCLONE_B2_ACCOUNT
+- Type: string
+- Default: ""
+
+--b2-key
+
+Application Key
+
+- Config: key
+- Env Var: RCLONE_B2_KEY
+- Type: string
+- Default: ""
+
+--b2-hard-delete
+
+Permanently delete files on remote removal, otherwise hide files.
+
+- Config: hard_delete
+- Env Var: RCLONE_B2_HARD_DELETE
+- Type: bool
+- Default: false
+
+Advanced Options
+
+Here are the advanced options specific to b2 (Backblaze B2).
+
+--b2-endpoint
+
+Endpoint for the service. Leave blank normally.
+
+- Config: endpoint
+- Env Var: RCLONE_B2_ENDPOINT
+- Type: string
+- Default: ""
+
+--b2-test-mode
+
+A flag string for X-Bz-Test-Mode header for debugging.
+
+This is for debugging purposes only. Setting it to one of the strings
+below will cause b2 to return specific errors:
+
+- "fail_some_uploads"
+- "expire_some_account_authorization_tokens"
+- "force_cap_exceeded"
+
+These will be set in the "X-Bz-Test-Mode" header which is documented in
+the b2 integrations checklist.
+
+- Config: test_mode
+- Env Var: RCLONE_B2_TEST_MODE
+- Type: string
+- Default: ""
+
+--b2-versions
+
+Include old versions in directory listings. Note that when using this no
+file write operations are permitted, so you can't upload files or delete
+them.
+
+- Config: versions
+- Env Var: RCLONE_B2_VERSIONS
+- Type: bool
+- Default: false
+
+--b2-upload-cutoff
+
+Cutoff for switching to chunked upload.
+
+Files above this size will be uploaded in chunks of "--b2-chunk-size".
+
+This value should be set no larger than 4.657GiB (== 5GB).
+
+- Config: upload_cutoff
+- Env Var: RCLONE_B2_UPLOAD_CUTOFF
+- Type: SizeSuffix
+- Default: 200M
+
+--b2-chunk-size
+
+Upload chunk size. Must fit in memory.
+
+When uploading large files, chunk the file into this size. Note that
+these chunks are buffered in memory and there might be a maximum of
+"--transfers" chunks in progress at once. 5,000,000 Bytes is the minimum
+size.
+
+- Config: chunk_size
+- Env Var: RCLONE_B2_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 96M
+
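The note above implies a worst-case memory bound for uploads: roughly "--transfers" chunks of "--b2-chunk-size" each buffered at once. A back-of-envelope check with the defaults (plain arithmetic, not an rclone command):

```shell
# Worst-case upload buffer memory: --transfers chunks of
# --b2-chunk-size buffered simultaneously (defaults shown).
TRANSFERS=4
CHUNK_SIZE_MB=96
echo "up to $((TRANSFERS * CHUNK_SIZE_MB))MB buffered"   # up to 384MB buffered
```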
Box
@@ -6967,18 +7894,49 @@ Deleting files
Depending on the enterprise settings for your user, the item will either
be actually deleted from Box or moved to the trash.
-Specific options
+Standard Options
-Here are the command line options specific to this cloud storage system.
+Here are the standard options specific to box (Box).
---box-upload-cutoff=SIZE
+--box-client-id
-Cutoff for switching to chunked upload - must be >= 50MB. The default is
-50MB.
+Box App Client Id. Leave blank normally.
---box-commit-retries int
+- Config: client_id
+- Env Var: RCLONE_BOX_CLIENT_ID
+- Type: string
+- Default: ""
-Max number of times to try committing a multipart file. (default 100)
+--box-client-secret
+
+Box App Client Secret. Leave blank normally.
+
+- Config: client_secret
+- Env Var: RCLONE_BOX_CLIENT_SECRET
+- Type: string
+- Default: ""
+
+Advanced Options
+
+Here are the advanced options specific to box (Box).
+
+--box-upload-cutoff
+
+Cutoff for switching to multipart upload (>= 50MB).
+
+- Config: upload_cutoff
+- Env Var: RCLONE_BOX_UPLOAD_CUTOFF
+- Type: SizeSuffix
+- Default: 50M
+
+--box-commit-retries
+
+Max number of times to try committing a multipart file.
+
+- Config: commit_retries
+- Env Var: RCLONE_BOX_COMMIT_RETRIES
+- Type: int
+- Default: 100
Limitations
@@ -7121,7 +8079,8 @@ A files goes through these states when using this feature:
Files are uploaded in sequence and only one file is uploaded at a time.
Uploads will be stored in a queue and be processed based on the order
they were added. The queue and the temporary storage is persistent
-across restarts and even purges of the cache.
+across restarts but can be cleared on startup with the --cache-db-purge
+flag.
Write Support
@@ -7170,6 +8129,29 @@ enabled.
Affected settings: - cache-workers: _Configured value_ during confirmed
playback or _1_ all the other times
+Certificate Validation
+
+When the Plex server is configured to only accept secure connections, it
+is possible to use .plex.direct URLs to ensure certificate validation
+succeeds. These URLs are used by Plex internally to connect to the Plex
+server securely.
+
+The format for these URLs is the following:
+
+https://ip-with-dots-replaced.server-hash.plex.direct:32400/
+
+The ip-with-dots-replaced part can be any IPv4 address, where the dots
+have been replaced with dashes, e.g. 127.0.0.1 becomes 127-0-0-1.
+
+To get the server-hash part, the easiest way is to visit
+
+https://plex.tv/api/resources?includeHttps=1&X-Plex-Token=your-plex-token
+
+This page will list all the available Plex servers for your account with
+at least one .plex.direct link for each. Copy one URL and replace the IP
+address with the desired address. This can be used as the plex_url
+value.
+
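The substitution described above can be sketched in plain shell; the server hash below is a hypothetical placeholder, the real value comes from the plex.tv resources page:

```shell
# Build a .plex.direct URL from a server IP and hash.
# SERVER_HASH is a placeholder -- use the value from the
# plex.tv resources page described above.
PLEX_IP="127.0.0.1"
SERVER_HASH="abcdef1234567890"
PLEX_HOST="$(echo "$PLEX_IP" | sed 's/\./-/g')"
echo "https://${PLEX_HOST}.${SERVER_HASH}.plex.direct:32400/"
# prints https://127-0-0-1.abcdef1234567890.plex.direct:32400/
```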
Known issues
Mount and --dir-cache-time
@@ -7237,6 +8219,21 @@ cloud provider which makes it think we're downloading the full file
instead of small chunks. Organizing the remotes in this order yields
better results: CLOUD REMOTE -> CACHE -> CRYPT
+absolute remote paths
+
+cache can not differentiate between relative and absolute paths for the
+wrapped remote. Any path given in the remote config setting and on the
+command line will be passed to the wrapped remote as is, but for storing
+the chunks on disk the path will be made relative by removing any
+leading / character.
+
+This behavior is irrelevant for most backend types, but there are
+backends where a leading / changes the effective directory, e.g. in the
+sftp backend paths starting with a / are relative to the root of the SSH
+server and paths without are relative to the user home directory. As a
+result sftp:bin and sftp:/bin will share the same cache folder, even if
+they represent a different directory on the SSH server.
+
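The leading-slash stripping can be pictured with shell parameter expansion (a sketch of the observable behaviour, not rclone's actual code):

```shell
# Both of these paths produce the same cache folder name, because the
# leading "/" is removed before the chunk path is built.
for p in "sftp:bin" "sftp:/bin"; do
  remote="${p%%:*}"          # backend name, eg "sftp"
  path="${p#*:}"             # path part, with or without leading "/"
  echo "$remote/${path#/}"   # cache folder: "sftp/bin" for both
done
```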
Cache and Remote Control (--rc)
Cache supports the new --rc mode in rclone and can be remote controlled
@@ -7252,73 +8249,177 @@ wrapped by crypt.
Params: - REMOTE = path to remote (REQUIRED) - WITHDATA = true/false to
delete cached data (chunks) as well _(optional, false by default)_
-Specific options
+Standard Options
-Here are the command line options specific to this cloud storage system.
+Here are the standard options specific to cache (Cache a remote).
---cache-db-path=PATH
+--cache-remote
-Path to where the file structure metadata (DB) is stored locally. The
-remote name is used as the DB file name.
+Remote to cache. Normally should contain a ':' and a path, eg
+"myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not
+recommended).
-DEFAULT: /cache-backend/ EXAMPLE: /.cache/cache-backend/test-cache
+- Config: remote
+- Env Var: RCLONE_CACHE_REMOTE
+- Type: string
+- Default: ""
---cache-chunk-path=PATH
+--cache-plex-url
-Path to where partial file data (chunks) is stored locally. The remote
+The URL of the Plex server
+
+- Config: plex_url
+- Env Var: RCLONE_CACHE_PLEX_URL
+- Type: string
+- Default: ""
+
+--cache-plex-username
+
+The username of the Plex user
+
+- Config: plex_username
+- Env Var: RCLONE_CACHE_PLEX_USERNAME
+- Type: string
+- Default: ""
+
+--cache-plex-password
+
+The password of the Plex user
+
+- Config: plex_password
+- Env Var: RCLONE_CACHE_PLEX_PASSWORD
+- Type: string
+- Default: ""
+
+--cache-chunk-size
+
+The size of a chunk (partial file data).
+
+Use lower numbers for slower connections. If the chunk size is changed,
+any downloaded chunks will be invalid and cache-chunk-path will need to
+be cleared or unexpected EOF errors will occur.
+
+- Config: chunk_size
+- Env Var: RCLONE_CACHE_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 5M
+- Examples:
+ - "1m"
+ - 1MB
+ - "5M"
+ - 5 MB
+ - "10M"
+ - 10 MB
+
+--cache-info-age
+
+How long to cache file structure information (directory listings, file
+size, times etc). If all write operations are done through the cache
+then you can safely make this value very large as the cache store will
+also be updated in real time.
+
+- Config: info_age
+- Env Var: RCLONE_CACHE_INFO_AGE
+- Type: Duration
+- Default: 6h0m0s
+- Examples:
+ - "1h"
+ - 1 hour
+ - "24h"
+ - 24 hours
+ - "48h"
+ - 48 hours
+
+--cache-chunk-total-size
+
+The total size that the chunks can take up on the local disk.
+
+If the cache exceeds this value then it will start to delete the oldest
+chunks until it goes under this value.
+
+- Config: chunk_total_size
+- Env Var: RCLONE_CACHE_CHUNK_TOTAL_SIZE
+- Type: SizeSuffix
+- Default: 10G
+- Examples:
+ - "500M"
+ - 500 MB
+ - "1G"
+ - 1 GB
+ - "10G"
+ - 10 GB
+
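Taken together with "--cache-chunk-size", this setting bounds how many chunks stay on disk before the oldest are evicted; with the defaults the arithmetic works out as:

```shell
# Approximate number of chunks kept before eviction starts,
# using the defaults: 10G total, 5M per chunk.
TOTAL_MB=$((10 * 1024))
CHUNK_MB=5
echo "$((TOTAL_MB / CHUNK_MB)) chunks"   # 2048 chunks
```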
+Advanced Options
+
+Here are the advanced options specific to cache (Cache a remote).
+
+--cache-plex-token
+
+The plex token for authentication - auto set normally
+
+- Config: plex_token
+- Env Var: RCLONE_CACHE_PLEX_TOKEN
+- Type: string
+- Default: ""
+
+--cache-plex-insecure
+
+Skip all certificate verifications when connecting to the Plex server
+
+- Config: plex_insecure
+- Env Var: RCLONE_CACHE_PLEX_INSECURE
+- Type: string
+- Default: ""
+
+--cache-db-path
+
+Directory to store file structure metadata DB. The remote name is used
+as the DB file name.
+
+- Config: db_path
+- Env Var: RCLONE_CACHE_DB_PATH
+- Type: string
+- Default: "/home/ncw/.cache/rclone/cache-backend"
+
+--cache-chunk-path
+
+Directory to cache chunk files.
+
+Path to where partial file data (chunks) are stored locally. The remote
name is appended to the final path.
-This config follows the --cache-db-path. If you specify a custom
-location for --cache-db-path and don't specify one for
---cache-chunk-path then --cache-chunk-path will use the same path as
---cache-db-path.
+This config follows the "--cache-db-path". If you specify a custom
+location for "--cache-db-path" and don't specify one for
+"--cache-chunk-path" then "--cache-chunk-path" will use the same path as
+"--cache-db-path".
-DEFAULT: /cache-backend/ EXAMPLE: /.cache/cache-backend/test-cache
+- Config: chunk_path
+- Env Var: RCLONE_CACHE_CHUNK_PATH
+- Type: string
+- Default: "/home/ncw/.cache/rclone/cache-backend"
--cache-db-purge
-Flag to clear all the cached data for this remote before.
+Clear all the cached data for this remote on start.
-DEFAULT: not set
+- Config: db_purge
+- Env Var: RCLONE_CACHE_DB_PURGE
+- Type: bool
+- Default: false
---cache-chunk-size=SIZE
+--cache-chunk-clean-interval
-The size of a chunk (partial file data). Use lower numbers for slower
-connections. If the chunk size is changed, any downloaded chunks will be
-invalid and cache-chunk-path will need to be cleared or unexpected EOF
-errors will occur.
+How often should the cache perform cleanups of the chunk storage. The
+default value should be ok for most people. If you find that the cache
+goes over "cache-chunk-total-size" too often then try to lower this
+value to force it to perform cleanups more often.
-DEFAULT: 5M
+- Config: chunk_clean_interval
+- Env Var: RCLONE_CACHE_CHUNK_CLEAN_INTERVAL
+- Type: Duration
+- Default: 1m0s
---cache-total-chunk-size=SIZE
-
-The total size that the chunks can take up on the local disk. If cache
-exceeds this value then it will start to the delete the oldest chunks
-until it goes under this value.
-
-DEFAULT: 10G
-
---cache-chunk-clean-interval=DURATION
-
-How often should cache perform cleanups of the chunk storage. The
-default value should be ok for most people. If you find that cache goes
-over cache-total-chunk-size too often then try to lower this value to
-force it to perform cleanups more often.
-
-DEFAULT: 1m
-
---cache-info-age=DURATION
-
-How long to keep file structure information (directory listings, file
-size, mod times etc) locally.
-
-If all write operations are done through cache then you can safely make
-this value very large as the cache store will also be updated in real
-time.
-
-DEFAULT: 6h
-
---cache-read-retries=RETRIES
+--cache-read-retries
How many times to retry a read from a cache storage.
@@ -7330,9 +8431,12 @@ isn't able to provide file data anymore.
For really slow connections, increase this to a point where the stream
is able to provide data but your experience will be very stuttering.
-DEFAULT: 10
+- Config: read_retries
+- Env Var: RCLONE_CACHE_READ_RETRIES
+- Type: int
+- Default: 10
---cache-workers=WORKERS
+--cache-workers
How many workers should run in parallel to download chunks.
@@ -7344,26 +8448,39 @@ and data will be available much more faster to readers.
NOTE: If the optional Plex integration is enabled then this setting will
adapt to the type of reading performed and the value specified here will
-be used as a maximum number of workers to use. DEFAULT: 4
+be used as a maximum number of workers to use.
+
+- Config: workers
+- Env Var: RCLONE_CACHE_WORKERS
+- Type: int
+- Default: 4
--cache-chunk-no-memory
+Disable the in-memory cache for storing chunks during streaming.
+
By default, cache will keep file data during streaming in RAM as well to
provide it to readers as fast as possible.
This transient data is evicted as soon as it is read and the number of
chunks stored doesn't exceed the number of workers. However, depending
-on other settings like cache-chunk-size and cache-workers this footprint
-can increase if there are parallel streams too (multiple files being
-read at the same time).
+on other settings like "cache-chunk-size" and "cache-workers" this
+footprint can increase if there are parallel streams too (multiple files
+being read at the same time).
If the hardware permits it, use this feature to provide an overall
better performance during streaming but it can also be disabled if RAM
is not available on the local machine.
-DEFAULT: not set
+- Config: chunk_no_memory
+- Env Var: RCLONE_CACHE_CHUNK_NO_MEMORY
+- Type: bool
+- Default: false
---cache-rps=NUMBER
+--cache-rps
+
+Limits the number of requests per second to the source FS (-1 to
+disable)
This setting places a hard limit on the number of requests per second
that cache will be doing to the cloud provider remote and try to respect
@@ -7379,17 +8496,27 @@ useless but it is available to set for more special cases.
NOTE: This will limit the number of requests during streams but other
API calls to the cloud provider like directory listings will still pass.
-DEFAULT: disabled
+- Config: rps
+- Env Var: RCLONE_CACHE_RPS
+- Type: int
+- Default: -1
--cache-writes
+Cache file data on writes through the FS
+
If you need to read files immediately after you upload them through
cache you can enable this flag to have their data stored in the cache
store at the same time during upload.
-DEFAULT: not set
+- Config: writes
+- Env Var: RCLONE_CACHE_WRITES
+- Type: bool
+- Default: false
---cache-tmp-upload-path=PATH
+--cache-tmp-upload-path
+
+Directory to keep temporary files until they are uploaded.
This is the path where cache will use as a temporary storage for new
files that need to be uploaded to the cloud provider.
@@ -7398,9 +8525,14 @@ Specifying a value will enable this feature. Without it, it is
completely disabled and files will be uploaded directly to the cloud
provider
-DEFAULT: empty
+- Config: tmp_upload_path
+- Env Var: RCLONE_CACHE_TMP_UPLOAD_PATH
+- Type: string
+- Default: ""
---cache-tmp-wait-time=DURATION
+--cache-tmp-wait-time
+
+How long should files be stored in local cache before being uploaded
This is the duration that a file must wait in the temporary location
_cache-tmp-upload-path_ before it is selected for upload.
@@ -7408,9 +8540,14 @@ _cache-tmp-upload-path_ before it is selected for upload.
Note that only one file is uploaded at a time and it can take longer to
start the upload if a queue formed for this purpose.
-DEFAULT: 15m
+- Config: tmp_wait_time
+- Env Var: RCLONE_CACHE_TMP_WAIT_TIME
+- Type: Duration
+- Default: 15s
---cache-db-wait-time=DURATION
+--cache-db-wait-time
+
+How long to wait for the DB to be available - 0 is unlimited
Only one process can have the DB open at any one time, so rclone waits
for this duration for the DB to become available before it gives an
@@ -7418,7 +8555,10 @@ error.
If you set it to 0 then it will wait forever.
-DEFAULT: 1s
+- Config: db_wait_time
+- Env Var: RCLONE_CACHE_DB_WAIT_TIME
+- Type: Duration
+- Default: 1s
Crypt
@@ -7697,12 +8837,80 @@ Note that you should use the rclone cryptcheck command to check the
integrity of a crypted remote instead of rclone check which can't check
the checksums properly.
-Specific options
+Standard Options
-Here are the command line options specific to this cloud storage system.
+Here are the standard options specific to crypt (Encrypt/Decrypt a
+remote).
+
+--crypt-remote
+
+Remote to encrypt/decrypt. Normally should contain a ':' and a path, eg
+"myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not
+recommended).
+
+- Config: remote
+- Env Var: RCLONE_CRYPT_REMOTE
+- Type: string
+- Default: ""
+
+--crypt-filename-encryption
+
+How to encrypt the filenames.
+
+- Config: filename_encryption
+- Env Var: RCLONE_CRYPT_FILENAME_ENCRYPTION
+- Type: string
+- Default: "standard"
+- Examples:
+ - "off"
+ - Don't encrypt the file names. Adds a ".bin" extension only.
+ - "standard"
+ - Encrypt the filenames see the docs for the details.
+ - "obfuscate"
+ - Very simple filename obfuscation.
+
+--crypt-directory-name-encryption
+
+Option to either encrypt directory names or leave them intact.
+
+- Config: directory_name_encryption
+- Env Var: RCLONE_CRYPT_DIRECTORY_NAME_ENCRYPTION
+- Type: bool
+- Default: true
+- Examples:
+ - "true"
+ - Encrypt directory names.
+ - "false"
+ - Don't encrypt directory names, leave them intact.
+
+--crypt-password
+
+Password or pass phrase for encryption.
+
+- Config: password
+- Env Var: RCLONE_CRYPT_PASSWORD
+- Type: string
+- Default: ""
+
+--crypt-password2
+
+Password or pass phrase for salt. Optional but recommended. Should be
+different to the previous password.
+
+- Config: password2
+- Env Var: RCLONE_CRYPT_PASSWORD2
+- Type: string
+- Default: ""
+
+Advanced Options
+
+Here are the advanced options specific to crypt (Encrypt/Decrypt a
+remote).
--crypt-show-mapping
+For all files listed show how the names encrypt.
+
If this flag is set then for each file that the remote is asked to list,
it will log (at level INFO) a line stating the decrypted file name and
the encrypted file name.
@@ -7711,6 +8919,11 @@ This is so you can work out which encrypted names are which decrypted
names just in case you need to do something with the encrypted file
names, or for debugging purposes.
+- Config: show_mapping
+- Env Var: RCLONE_CRYPT_SHOW_MAPPING
+- Type: bool
+- Default: false
+
Backing up a crypted remote
@@ -7952,20 +9165,48 @@ don't want this to happen use --size-only or --checksum flag to stop it.
Dropbox supports its own hash type which is checked for all transfers.
-Specific options
+Standard Options
-Here are the command line options specific to this cloud storage system.
+Here are the standard options specific to dropbox (Dropbox).
---dropbox-chunk-size=SIZE
+--dropbox-client-id
-Any files larger than this will be uploaded in chunks of this size. The
-default is 48MB. The maximum is 150MB.
+Dropbox App Client Id. Leave blank normally.
+
+- Config: client_id
+- Env Var: RCLONE_DROPBOX_CLIENT_ID
+- Type: string
+- Default: ""
+
+--dropbox-client-secret
+
+Dropbox App Client Secret. Leave blank normally.
+
+- Config: client_secret
+- Env Var: RCLONE_DROPBOX_CLIENT_SECRET
+- Type: string
+- Default: ""
+
+Advanced Options
+
+Here are the advanced options specific to dropbox (Dropbox).
+
+--dropbox-chunk-size
+
+Upload chunk size. (< 150M).
+
+Any files larger than this will be uploaded in chunks of this size.
Note that chunks are buffered in memory (one at a time) so rclone can
deal with retries. Setting this larger will increase the speed slightly
(at most 10% for 128MB in tests) at the cost of using more memory. It
can be set smaller if you are tight on memory.
+- Config: chunk_size
+- Env Var: RCLONE_DROPBOX_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 48M
+
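Because chunks are buffered one at a time per transfer, peak buffer memory is roughly "--transfers" times the chunk size; for example, raising the chunk size to 128M with the default 4 transfers (illustrative arithmetic only):

```shell
# Peak memory for Dropbox chunk buffers: one chunk per transfer.
TRANSFERS=4
CHUNK_SIZE_MB=128   # raised from the default 48M for faster uploads
echo "about $((TRANSFERS * CHUNK_SIZE_MB))MB of buffers"   # about 512MB of buffers
```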
Limitations
Note that Dropbox is case insensitive so you can't have a file called
@@ -8095,6 +9336,49 @@ Checksums
FTP does not support any checksums.
+Standard Options
+
+Here are the standard options specific to ftp (FTP Connection).
+
+--ftp-host
+
+FTP host to connect to
+
+- Config: host
+- Env Var: RCLONE_FTP_HOST
+- Type: string
+- Default: ""
+- Examples:
+ - "ftp.example.com"
+ - Connect to ftp.example.com
+
+--ftp-user
+
+FTP username, leave blank for current username, ncw
+
+- Config: user
+- Env Var: RCLONE_FTP_USER
+- Type: string
+- Default: ""
+
+--ftp-port
+
+FTP port, leave blank to use default (21)
+
+- Config: port
+- Env Var: RCLONE_FTP_PORT
+- Type: string
+- Default: ""
+
+--ftp-pass
+
+FTP password
+
+- Config: pass
+- Env Var: RCLONE_FTP_PASS
+- Type: string
+- Default: ""
+
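These options can also come from the environment variables listed above; a sketch with placeholder host, user and port values:

```shell
# FTP connection details via environment variables; the host, user
# and port here are placeholders.
export RCLONE_FTP_HOST=ftp.example.com
export RCLONE_FTP_USER=alice
export RCLONE_FTP_PORT=2121
echo "$RCLONE_FTP_USER@$RCLONE_FTP_HOST:$RCLONE_FTP_PORT"
```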
Limitations
Note that since FTP isn't HTTP based the following flags don't work with
@@ -8324,6 +9608,170 @@ Google google cloud storage stores md5sums natively and rclone stores
modification times as metadata on the object, under the "mtime" key in
RFC3339 format accurate to 1ns.
+Standard Options
+
+Here are the standard options specific to google cloud storage (Google
+Cloud Storage (this is not Google Drive)).
+
+--gcs-client-id
+
+Google Application Client Id. Leave blank normally.
+
+- Config: client_id
+- Env Var: RCLONE_GCS_CLIENT_ID
+- Type: string
+- Default: ""
+
+--gcs-client-secret
+
+Google Application Client Secret. Leave blank normally.
+
+- Config: client_secret
+- Env Var: RCLONE_GCS_CLIENT_SECRET
+- Type: string
+- Default: ""
+
+--gcs-project-number
+
+Project number. Optional - needed only for list/create/delete buckets -
+see your developer console.
+
+- Config: project_number
+- Env Var: RCLONE_GCS_PROJECT_NUMBER
+- Type: string
+- Default: ""
+
+--gcs-service-account-file
+
+Service Account Credentials JSON file path. Leave blank normally. Needed
+only if you want to use SA instead of interactive login.
+
+- Config: service_account_file
+- Env Var: RCLONE_GCS_SERVICE_ACCOUNT_FILE
+- Type: string
+- Default: ""
+
+--gcs-service-account-credentials
+
+Service Account Credentials JSON blob. Leave blank normally. Needed only
+if you want to use SA instead of interactive login.
+
+- Config: service_account_credentials
+- Env Var: RCLONE_GCS_SERVICE_ACCOUNT_CREDENTIALS
+- Type: string
+- Default: ""
+
+--gcs-object-acl
+
+Access Control List for new objects.
+
+- Config: object_acl
+- Env Var: RCLONE_GCS_OBJECT_ACL
+- Type: string
+- Default: ""
+- Examples:
+ - "authenticatedRead"
+ - Object owner gets OWNER access, and all Authenticated Users
+ get READER access.
+ - "bucketOwnerFullControl"
+ - Object owner gets OWNER access, and project team owners get
+ OWNER access.
+ - "bucketOwnerRead"
+ - Object owner gets OWNER access, and project team owners get
+ READER access.
+ - "private"
+ - Object owner gets OWNER access [default if left blank].
+ - "projectPrivate"
+ - Object owner gets OWNER access, and project team members get
+ access according to their roles.
+ - "publicRead"
+ - Object owner gets OWNER access, and all Users get READER
+ access.
+
+--gcs-bucket-acl
+
+Access Control List for new buckets.
+
+- Config: bucket_acl
+- Env Var: RCLONE_GCS_BUCKET_ACL
+- Type: string
+- Default: ""
+- Examples:
+ - "authenticatedRead"
+ - Project team owners get OWNER access, and all Authenticated
+ Users get READER access.
+ - "private"
+ - Project team owners get OWNER access [default if left
+ blank].
+ - "projectPrivate"
+ - Project team members get access according to their roles.
+ - "publicRead"
+ - Project team owners get OWNER access, and all Users get
+ READER access.
+ - "publicReadWrite"
+ - Project team owners get OWNER access, and all Users get
+ WRITER access.
+
+--gcs-location
+
+Location for the newly created buckets.
+
+- Config: location
+- Env Var: RCLONE_GCS_LOCATION
+- Type: string
+- Default: ""
+- Examples:
+ - ""
+ - Empty for default location (US).
+ - "asia"
+ - Multi-regional location for Asia.
+ - "eu"
+ - Multi-regional location for Europe.
+ - "us"
+ - Multi-regional location for United States.
+ - "asia-east1"
+ - Taiwan.
+ - "asia-northeast1"
+ - Tokyo.
+ - "asia-southeast1"
+ - Singapore.
+ - "australia-southeast1"
+ - Sydney.
+ - "europe-west1"
+ - Belgium.
+ - "europe-west2"
+ - London.
+ - "us-central1"
+ - Iowa.
+ - "us-east1"
+ - South Carolina.
+ - "us-east4"
+ - Northern Virginia.
+ - "us-west1"
+ - Oregon.
+
+--gcs-storage-class
+
+The storage class to use when storing objects in Google Cloud Storage.
+
+- Config: storage_class
+- Env Var: RCLONE_GCS_STORAGE_CLASS
+- Type: string
+- Default: ""
+- Examples:
+ - ""
+ - Default
+ - "MULTI_REGIONAL"
+ - Multi-regional storage class
+ - "REGIONAL"
+ - Regional storage class
+ - "NEARLINE"
+ - Nearline storage class
+ - "COLDLINE"
+ - Coldline storage class
+ - "DURABLE_REDUCED_AVAILABILITY"
+ - Durable reduced availability storage class
+
Google Drive
@@ -8699,63 +10147,75 @@ which will display your usage limit (quota), the usage in Google Drive,
the size of all files in the Trash and the space used by other Google
services such as Gmail. This command does not take any path arguments.
-Specific options
+Import/Export of google documents
-Here are the command line options specific to this cloud storage system.
+Google documents can be exported from and uploaded to Google Drive.
---drive-acknowledge-abuse
-
-If downloading a file returns the error
-This file has been identified as malware or spam and cannot be downloaded
-with the error code cannotDownloadAbusiveFile then supply this flag to
-rclone to indicate you acknowledge the risks of downloading the file and
-rclone will download it anyway.
-
---drive-auth-owner-only
-
-Only consider files owned by the authenticated user.
-
---drive-chunk-size=SIZE
-
-Upload chunk size. Must a power of 2 >= 256k. Default value is 8 MB.
-
-Making this larger will improve performance, but note that each chunk is
-buffered in memory one per transfer.
-
-Reducing this will reduce memory usage but decrease performance.
-
---drive-formats
-
-Google documents can only be exported from Google drive. When rclone
-downloads a Google doc it chooses a format to download depending upon
-this setting.
-
-By default the formats are docx,xlsx,pptx,svg which are a sensible
-default for an editable document.
+When rclone downloads a Google doc it chooses a format to download
+depending upon the --drive-export-formats setting. By default the export
+formats are docx,xlsx,pptx,svg which are a sensible default for an
+editable document.
When choosing a format, rclone runs down the list provided in order and
chooses the first file format the doc can be exported as from the list.
If the file can't be exported to a format on the formats list, then
rclone will choose a format from the default list.
-If you prefer an archive copy then you might use --drive-formats pdf, or
-if you prefer openoffice/libreoffice formats you might use
---drive-formats ods,odt,odp.
+If you prefer an archive copy then you might use
+--drive-export-formats pdf, or if you prefer openoffice/libreoffice
+formats you might use --drive-export-formats ods,odt,odp.
Note that rclone adds the extension to the google doc, so if it is
called My Spreadsheet on google docs, it will be exported as
My Spreadsheet.xlsx or My Spreadsheet.pdf etc.
-Here are the possible extensions with their corresponding mime types.
+When importing files into Google Drive, rclone will convert all files
+with an extension in --drive-import-formats to their associated document
+type. rclone will not convert any files by default, since the conversion
+is a lossy process.
+
+The conversion must result in a file with the same extension when the
+--drive-export-formats rules are applied to the uploaded document.
+
+Here are some examples for allowed and prohibited conversions.
+
+ export-formats import-formats Upload Ext Document Ext Allowed
+ ---------------- ---------------- ------------ -------------- ---------
+ odt odt odt odt Yes
+ odt docx,odt odt odt Yes
+ docx docx docx Yes
+ odt odt docx No
+ odt,docx docx,odt docx odt No
+ docx,odt docx,odt docx docx Yes
+ docx,odt docx,odt odt docx No
+
+This limitation can be disabled by specifying
+--drive-allow-import-name-change. When using this flag, rclone can
+convert multiple file types resulting in the same document type at
+once, eg with --drive-import-formats docx,odt,txt, all files having
+these extensions would result in a document represented as a docx file.
+This brings the additional risk of overwriting a document, if multiple
+files have the same stem. Many rclone operations will not handle this
+name change in any way. They assume an equal name when copying files and
+might copy the file again or delete them when the name changes.
+
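The stem collision described above can be seen with plain filename manipulation; both uploads below would target a document named "report" (a sketch of the naming rule, not an rclone invocation):

```shell
# With --drive-import-formats docx,odt both files convert to a Google
# document named after the extension-stripped stem -- the same name.
for f in report.docx report.odt; do
  echo "${f%.*}"   # prints "report" both times
done
```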
+Here are the possible export extensions with their corresponding mime
+types. Most of these can also be used for importing, but there are more
+that are not listed here. Some of these additional ones might only be
+available when the operating system provides the correct MIME type
+entries.
+
+This list can be changed by Google Drive at any time and might not
+represent the currently available conversions.
Extension Mime Type Description
--------------------------------------------------------------------------- ------------------------------------------------------------------------------------------ -------------------------------------------------------------------------------------------------
csv text/csv Standard CSV format for Spreadsheets
- doc application/msword Micosoft Office Document
docx application/vnd.openxmlformats-officedocument.wordprocessingml.document Microsoft Office Document
epub application/epub+zip E-book format
html text/html An HTML Document
jpg image/jpeg A JPEG Image File
+ json application/vnd.google-apps.script+json JSON Text Format
odp application/vnd.oasis.opendocument.presentation Openoffice Presentation
ods application/vnd.oasis.opendocument.spreadsheet Openoffice Spreadsheet
ods application/x-vnd.oasis.opendocument.spreadsheet Openoffice Spreadsheet
@@ -8767,35 +10227,146 @@ Here are the possible extensions with their corresponding mime types.
svg image/svg+xml Scalable Vector Graphics Format
tsv text/tab-separated-values Standard TSV format for spreadsheets
txt text/plain Plain Text
- xls application/vnd.ms-excel Microsoft Office Spreadsheet
xlsx application/vnd.openxmlformats-officedocument.spreadsheetml.sheet Microsoft Office Spreadsheet
zip application/zip A ZIP file of HTML, Images CSS
---drive-alternate-export
+Google documents can also be exported as link files. These files will
+open a browser window for the Google Docs website of that document when
+opened. The link file extension has to be specified as a
+--drive-export-formats parameter. They will match all available Google
+Documents.
-If this option is set this instructs rclone to use an alternate set of
-export URLs for drive documents. Users have reported that the official
-export URLs can't export large documents, whereas these unofficial ones
-can.
+ Extension Description OS Support
+ ----------- ----------------------------------------- ----------------
+ desktop freedesktop.org specified desktop entry Linux
+ link.html An HTML Document with a redirect All
+ url INI style link file macOS, Windows
+ webloc macOS specific XML format macOS
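For instance, to export Google documents as HTML redirect link files (a sketch; the remote name drive: and paths are hypothetical):

```
rclone copy --drive-export-formats link.html drive:my-docs /tmp/my-docs
```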
-See rclone issue #2243 for background, this google drive issue and this
-helpful post.
+Standard Options
---drive-impersonate user
+Here are the standard options specific to drive (Google Drive).
-When using a service account, this instructs rclone to impersonate the
-user passed in.
+--drive-client-id
---drive-keep-revision-forever
+Google Application Client Id. Leave blank normally.
-Keeps new head revision of the file forever.
+- Config: client_id
+- Env Var: RCLONE_DRIVE_CLIENT_ID
+- Type: string
+- Default: ""
---drive-list-chunk int
+--drive-client-secret
-Size of listing chunk 100-1000. 0 to disable. (default 1000)
+Google Application Client Secret. Leave blank normally.
+
+- Config: client_secret
+- Env Var: RCLONE_DRIVE_CLIENT_SECRET
+- Type: string
+- Default: ""
+
+--drive-scope
+
+Scope that rclone should use when requesting access from drive.
+
+- Config: scope
+- Env Var: RCLONE_DRIVE_SCOPE
+- Type: string
+- Default: ""
+- Examples:
+ - "drive"
+ - Full access all files, excluding Application Data Folder.
+ - "drive.readonly"
+ - Read-only access to file metadata and file contents.
+ - "drive.file"
+ - Access to files created by rclone only.
+ - These are visible in the drive website.
+ - File authorization is revoked when the user deauthorizes the
+ app.
+ - "drive.appfolder"
+ - Allows read and write access to the Application Data folder.
+ - This is not visible in the drive website.
+ - "drive.metadata.readonly"
+ - Allows read-only access to file metadata but
+ - does not allow any access to read or download file content.
+
+--drive-root-folder-id
+
+ID of the root folder. Leave blank normally. Fill in to access
+"Computers" folders. (see docs).
+
+- Config: root_folder_id
+- Env Var: RCLONE_DRIVE_ROOT_FOLDER_ID
+- Type: string
+- Default: ""
+
+--drive-service-account-file
+
+Service Account Credentials JSON file path. Leave blank normally.
+Needed only if you want to use SA instead of interactive login.
+
+- Config: service_account_file
+- Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_FILE
+- Type: string
+- Default: ""
+
+Advanced Options
+
+Here are the advanced options specific to drive (Google Drive).
+
+--drive-service-account-credentials
+
+Service Account Credentials JSON blob. Leave blank normally. Needed
+only if you want to use SA instead of interactive login.
+
+- Config: service_account_credentials
+- Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_CREDENTIALS
+- Type: string
+- Default: ""
+
+--drive-team-drive
+
+ID of the Team Drive
+
+- Config: team_drive
+- Env Var: RCLONE_DRIVE_TEAM_DRIVE
+- Type: string
+- Default: ""
+
+--drive-auth-owner-only
+
+Only consider files owned by the authenticated user.
+
+- Config: auth_owner_only
+- Env Var: RCLONE_DRIVE_AUTH_OWNER_ONLY
+- Type: bool
+- Default: false
+
+--drive-use-trash
+
+Send files to the trash instead of deleting permanently. Defaults to
+true, namely sending files to the trash. Use --drive-use-trash=false to
+delete files permanently instead.
+
+- Config: use_trash
+- Env Var: RCLONE_DRIVE_USE_TRASH
+- Type: bool
+- Default: true
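As a sketch (assuming a remote called drive:), permanent deletion can be requested either via the flag or its equivalent environment variable:

```
rclone delete --drive-use-trash=false drive:old-reports
# or equivalently:
RCLONE_DRIVE_USE_TRASH=false rclone delete drive:old-reports
```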
+
+--drive-skip-gdocs
+
+Skip google documents in all listings. If given, gdocs practically
+become invisible to rclone.
+
+- Config: skip_gdocs
+- Env Var: RCLONE_DRIVE_SKIP_GDOCS
+- Type: bool
+- Default: false
--drive-shared-with-me
+Only show files that are shared with me.
+
Instructs rclone to operate on your "Shared with me" folder (where
Google Drive lets you access the files and folders others have shared
with you).
@@ -8803,30 +10374,61 @@ with you).
This works both with the "list" (lsd, lsl, etc) and the "copy" commands
(copy, sync, etc), and with all other commands too.
---drive-skip-gdocs
-
-Skip google documents in all listings. If given, gdocs practically
-become invisible to rclone.
+- Config: shared_with_me
+- Env Var: RCLONE_DRIVE_SHARED_WITH_ME
+- Type: bool
+- Default: false
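For example (a sketch; the remote name and folder are hypothetical):

```
rclone lsd --drive-shared-with-me drive:
rclone copy --drive-shared-with-me drive:shared-folder /local/backup
```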
--drive-trashed-only
Only show files that are in the trash. This will show trashed files in
their original directory structure.
---drive-upload-cutoff=SIZE
+- Config: trashed_only
+- Env Var: RCLONE_DRIVE_TRASHED_ONLY
+- Type: bool
+- Default: false
-File size cutoff for switching to chunked upload. Default is 8 MB.
+--drive-formats
---drive-use-trash
+Deprecated: see export_formats
-Controls whether files are sent to the trash or deleted permanently.
-Defaults to true, namely sending files to the trash. Use
---drive-use-trash=false to delete files permanently instead.
+- Config: formats
+- Env Var: RCLONE_DRIVE_FORMATS
+- Type: string
+- Default: ""
+
+--drive-export-formats
+
+Comma separated list of preferred formats for downloading Google docs.
+
+- Config: export_formats
+- Env Var: RCLONE_DRIVE_EXPORT_FORMATS
+- Type: string
+- Default: "docx,xlsx,pptx,svg"
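Like every backend option, this one can also be set through its environment variable, named RCLONE_ plus the backend and option names upper-cased. A minimal shell sketch (the format list is an illustrative value, not a recommendation):

```shell
# Set the Google docs export formats via the environment instead of a flag.
export RCLONE_DRIVE_EXPORT_FORMATS="odt,ods,pdf"
echo "$RCLONE_DRIVE_EXPORT_FORMATS"
```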
+
+--drive-import-formats
+
+Comma separated list of preferred formats for uploading Google docs.
+
+- Config: import_formats
+- Env Var: RCLONE_DRIVE_IMPORT_FORMATS
+- Type: string
+- Default: ""
+
+--drive-allow-import-name-change
+
+Allow the filetype to change when uploading Google docs (e.g. file.doc
+to file.docx). This will confuse sync and reupload every time.
+
+- Config: allow_import_name_change
+- Env Var: RCLONE_DRIVE_ALLOW_IMPORT_NAME_CHANGE
+- Type: bool
+- Default: false
--drive-use-created-date
-Use the file creation date in place of the modification date. Defaults
-to false.
+Use file created date instead of modified date.
Useful when downloading data and you want the creation date used in
place of the last modified date.
@@ -8836,7 +10438,7 @@ WARNING: This flag may have some unexpected consequences.
When uploading to your drive all files will be overwritten unless they
haven't been modified since their creation. And the inverse will occur
while downloading. This side effect can be avoided by using the
---checksum flag.
+"--checksum" flag.
This feature was implemented to retain photos capture date as recorded
by google photos. You will first need to check the "Create a Google
@@ -8844,6 +10446,103 @@ Photos folder" option in your google drive settings. You can then copy
or move the photos locally and use the date the image was taken
(created) set as the modification date.
+- Config: use_created_date
+- Env Var: RCLONE_DRIVE_USE_CREATED_DATE
+- Type: bool
+- Default: false
+
+--drive-list-chunk
+
+Size of listing chunk 100-1000. 0 to disable.
+
+- Config: list_chunk
+- Env Var: RCLONE_DRIVE_LIST_CHUNK
+- Type: int
+- Default: 1000
+
+--drive-impersonate
+
+Impersonate this user when using a service account.
+
+- Config: impersonate
+- Env Var: RCLONE_DRIVE_IMPERSONATE
+- Type: string
+- Default: ""
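A sketch combining a service account with impersonation (the credentials path and user are hypothetical):

```
rclone lsd --drive-service-account-file /path/to/sa.json \
    --drive-impersonate user@example.com drive:
```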
+
+--drive-alternate-export
+
+Use alternate export URLs for google documents export.
+
+If this option is set this instructs rclone to use an alternate set of
+export URLs for drive documents. Users have reported that the official
+export URLs can't export large documents, whereas these unofficial ones
+can.
+
+See rclone issue #2243 for background, this google drive issue and this
+helpful post.
+
+- Config: alternate_export
+- Env Var: RCLONE_DRIVE_ALTERNATE_EXPORT
+- Type: bool
+- Default: false
+
+--drive-upload-cutoff
+
+Cutoff for switching to chunked upload.
+
+- Config: upload_cutoff
+- Env Var: RCLONE_DRIVE_UPLOAD_CUTOFF
+- Type: SizeSuffix
+- Default: 8M
+
+--drive-chunk-size
+
+Upload chunk size. Must be a power of 2 >= 256k.
+
+Making this larger will improve performance, but note that each chunk is
+buffered in memory one per transfer.
+
+Reducing this will reduce memory usage but decrease performance.
+
+- Config: chunk_size
+- Env Var: RCLONE_DRIVE_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 8M
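Since each transfer buffers one chunk, peak buffer memory is roughly the chunk size multiplied by the number of transfers. A sketch (values are illustrative):

```
# Approximately 4 x 64M = 256M of upload buffers
rclone copy --drive-chunk-size 64M --transfers 4 /local/big-files drive:backup
```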
+
+--drive-acknowledge-abuse
+
+Set to allow files which return cannotDownloadAbusiveFile to be
+downloaded.
+
+If downloading a file returns the error "This file has been identified
+as malware or spam and cannot be downloaded" with the error code
+"cannotDownloadAbusiveFile" then supply this flag to rclone to indicate
+you acknowledge the risks of downloading the file and rclone will
+download it anyway.
+
+- Config: acknowledge_abuse
+- Env Var: RCLONE_DRIVE_ACKNOWLEDGE_ABUSE
+- Type: bool
+- Default: false
+
+--drive-keep-revision-forever
+
+Keep new head revision of each file forever.
+
+- Config: keep_revision_forever
+- Env Var: RCLONE_DRIVE_KEEP_REVISION_FOREVER
+- Type: bool
+- Default: false
+
+--drive-v2-download-min-size
+
+If objects are greater than this, use the drive v2 API to download.
+
+- Config: v2_download_min_size
+- Env Var: RCLONE_DRIVE_V2_DOWNLOAD_MIN_SIZE
+- Type: SizeSuffix
+- Default: off
+
Limitations
Drive has quite a lot of rate limiting. This causes rclone to be limited
@@ -9048,6 +10747,22 @@ without a config file:
rclone lsd --http-url https://beta.rclone.org :http:
+Standard Options
+
+Here are the standard options specific to http (http Connection).
+
+--http-url
+
+URL of http host to connect to
+
+- Config: url
+- Env Var: RCLONE_HTTP_URL
+- Type: string
+- Default: ""
+- Examples:
+ - "https://example.com"
+ - Connect to example.com
+
Hubic
@@ -9170,6 +10885,44 @@ amongst others) for storing the modification time for an object.
Note that Hubic wraps the Swift backend, so most of the properties of
are the same.
+Standard Options
+
+Here are the standard options specific to hubic (Hubic).
+
+--hubic-client-id
+
+Hubic Client Id. Leave blank normally.
+
+- Config: client_id
+- Env Var: RCLONE_HUBIC_CLIENT_ID
+- Type: string
+- Default: ""
+
+--hubic-client-secret
+
+Hubic Client Secret. Leave blank normally.
+
+- Config: client_secret
+- Env Var: RCLONE_HUBIC_CLIENT_SECRET
+- Type: string
+- Default: ""
+
+Advanced Options
+
+Here are the advanced options specific to hubic (Hubic).
+
+--hubic-chunk-size
+
+Above this size files will be chunked into a _segments container.
+
+Above this size files will be chunked into a _segments container. The
+default for this is 5GB which is its maximum value.
+
+- Config: chunk_size
+- Env Var: RCLONE_HUBIC_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 5G
+
Limitations
This uses the normal OpenStack Swift mechanism to refresh the Swift API
@@ -9256,6 +11009,16 @@ To copy a local directory to an Jottacloud directory called backup
rclone copy /home/source remote:backup
+--fast-list
+
+This remote supports --fast-list which allows you to use fewer
+transactions in exchange for more memory. See the rclone docs for more
+details.
+
+Note that the implementation in Jottacloud always uses only a single API
+request to get the entire list, so for large folders this could lead to
+a long wait time before the first results are shown.
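For example (a sketch; the remote name is hypothetical):

```
rclone size --fast-list remote:
```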
+
Modified time and hashes
Jottacloud allows modification times to be set on objects accurate to 1
@@ -9272,9 +11035,11 @@ before it is uploaded. Small files will be cached in memory - see the
Deleting files
-Any files you delete with rclone will end up in the trash. Due to a lack
-of API documentation emptying the trash is currently only possible via
-the Jottacloud website.
+By default rclone will send all files to the trash when deleting files.
+Due to a lack of API documentation emptying the trash is currently only
+possible via the Jottacloud website. If deleting permanently is required
+then use the --jottacloud-hard-delete flag, or set the equivalent
+environment variable.
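As a sketch, either form below deletes permanently (the remote name and path are hypothetical):

```
rclone delete --jottacloud-hard-delete remote:old-files
# or equivalently:
RCLONE_JOTTACLOUD_HARD_DELETE=true rclone delete remote:old-files
```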
Versions
@@ -9283,6 +11048,82 @@ of a file it creates a new version of it. Currently rclone only supports
retrieving the current version but older versions can be accessed via
the Jottacloud Website.
+Quota information
+
+To view your current quota you can use the rclone about remote: command
+which will display your usage limit (unless it is unlimited) and the
+current usage.
+
+Standard Options
+
+Here are the standard options specific to jottacloud (JottaCloud).
+
+--jottacloud-user
+
+User Name
+
+- Config: user
+- Env Var: RCLONE_JOTTACLOUD_USER
+- Type: string
+- Default: ""
+
+--jottacloud-pass
+
+Password.
+
+- Config: pass
+- Env Var: RCLONE_JOTTACLOUD_PASS
+- Type: string
+- Default: ""
+
+--jottacloud-mountpoint
+
+The mountpoint to use.
+
+- Config: mountpoint
+- Env Var: RCLONE_JOTTACLOUD_MOUNTPOINT
+- Type: string
+- Default: ""
+- Examples:
+ - "Sync"
+ - Will be synced by the official client.
+ - "Archive"
+ - Archive
+
+Advanced Options
+
+Here are the advanced options specific to jottacloud (JottaCloud).
+
+--jottacloud-md5-memory-limit
+
+Files bigger than this will be cached on disk to calculate the MD5 if
+required.
+
+- Config: md5_memory_limit
+- Env Var: RCLONE_JOTTACLOUD_MD5_MEMORY_LIMIT
+- Type: SizeSuffix
+- Default: 10M
+
+--jottacloud-hard-delete
+
+Delete files permanently rather than putting them into the trash.
+
+- Config: hard_delete
+- Env Var: RCLONE_JOTTACLOUD_HARD_DELETE
+- Type: bool
+- Default: false
+
+--jottacloud-unlink
+
+Remove existing public link to file/folder with link command rather than
+creating. Default is false, meaning link command will create or retrieve
+public link.
+
+- Config: unlink
+- Env Var: RCLONE_JOTTACLOUD_UNLINK
+- Type: bool
+- Default: false
+
Limitations
Note that Jottacloud is case insensitive so you can't have a file called
@@ -9295,15 +11136,6 @@ instead.
Jottacloud only supports filenames up to 255 characters in length.
-Specific options
-
-Here are the command line options specific to this cloud storage system.
-
---jottacloud-md5-memory-limit SizeSuffix
-
-Files bigger than this will be cached on disk to calculate the MD5 if
-required. (default 10M)
-
Troubleshooting
Jottacloud exhibits some inconsistent behaviours regarding deleted files
@@ -9400,21 +11232,56 @@ messages in the log about duplicates.
Use rclone dedupe to fix duplicated files.
-Specific options
+Standard Options
-Here are the command line options specific to this cloud storage system.
+Here are the standard options specific to mega (Mega).
+
+--mega-user
+
+User name
+
+- Config: user
+- Env Var: RCLONE_MEGA_USER
+- Type: string
+- Default: ""
+
+--mega-pass
+
+Password.
+
+- Config: pass
+- Env Var: RCLONE_MEGA_PASS
+- Type: string
+- Default: ""
+
+Advanced Options
+
+Here are the advanced options specific to mega (Mega).
--mega-debug
+Output more debug from Mega.
+
If this flag is set (along with -vv) it will print further debugging
information from the mega backend.
+- Config: debug
+- Env Var: RCLONE_MEGA_DEBUG
+- Type: bool
+- Default: false
+
--mega-hard-delete
+Delete files permanently rather than putting them into the trash.
+
Normally the mega backend will put all deletions into the trash rather
-than permanently deleting them. If you specify this flag (or set it in
-the advanced config) then rclone will permanently delete objects
-instead.
+than permanently deleting them. If you specify this then rclone will
+permanently delete objects instead.
+
+- Config: hard_delete
+- Env Var: RCLONE_MEGA_HARD_DELETE
+- Type: bool
+- Default: false
Limitations
@@ -9592,31 +11459,108 @@ upload which means that there is a limit of 9.5TB of multipart uploads
in progress as Azure won't allow more than that amount of uncommitted
blocks.
-Specific options
+Standard Options
-Here are the command line options specific to this cloud storage system.
+Here are the standard options specific to azureblob (Microsoft Azure
+Blob Storage).
---azureblob-upload-cutoff=SIZE
+--azureblob-account
-Cutoff for switching to chunked upload - must be <= 256MB. The default
-is 256MB.
+Storage Account Name (leave blank to use connection string or SAS URL)
---azureblob-chunk-size=SIZE
+- Config: account
+- Env Var: RCLONE_AZUREBLOB_ACCOUNT
+- Type: string
+- Default: ""
-Upload chunk size. Default 4MB. Note that this is stored in memory and
-there may be up to --transfers chunks stored at once in memory. This can
-be at most 100MB.
+--azureblob-key
---azureblob-access-tier=Hot/Cool/Archive
+Storage Account Key (leave blank to use connection string or SAS URL)
-Azure storage supports blob tiering, you can configure tier in advanced
-settings or supply flag while performing data transfer operations. If
-there is no access tier specified, rclone doesn't apply any tier. rclone
-performs Set Tier operation on blobs while uploading, if objects are not
-modified, specifying access tier to new one will have no effect. If
-blobs are in archive tier at remote, trying to perform data transfer
-operations from remote will not be allowed. User should first restore by
-tiering blob to Hot or Cool.
+- Config: key
+- Env Var: RCLONE_AZUREBLOB_KEY
+- Type: string
+- Default: ""
+
+--azureblob-sas-url
+
+SAS URL for container level access only (leave blank if using
+account/key or connection string)
+
+- Config: sas_url
+- Env Var: RCLONE_AZUREBLOB_SAS_URL
+- Type: string
+- Default: ""
+
+Advanced Options
+
+Here are the advanced options specific to azureblob (Microsoft Azure
+Blob Storage).
+
+--azureblob-endpoint
+
+Endpoint for the service. Leave blank normally.
+
+- Config: endpoint
+- Env Var: RCLONE_AZUREBLOB_ENDPOINT
+- Type: string
+- Default: ""
+
+--azureblob-upload-cutoff
+
+Cutoff for switching to chunked upload (<= 256MB).
+
+- Config: upload_cutoff
+- Env Var: RCLONE_AZUREBLOB_UPLOAD_CUTOFF
+- Type: SizeSuffix
+- Default: 256M
+
+--azureblob-chunk-size
+
+Upload chunk size (<= 100MB).
+
+Note that this is stored in memory and there may be up to "--transfers"
+chunks stored at once in memory.
+
+- Config: chunk_size
+- Env Var: RCLONE_AZUREBLOB_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 4M
+
+--azureblob-list-chunk
+
+Size of blob list.
+
+This sets the number of blobs requested in each listing chunk. Default
+is the maximum, 5000. "List blobs" requests are permitted 2 minutes per
+megabyte to complete. If an operation is taking longer than 2 minutes
+per megabyte on average, it will time out (source). This can be used
+to limit the number of blob items returned, to avoid the time out.
+
+- Config: list_chunk
+- Env Var: RCLONE_AZUREBLOB_LIST_CHUNK
+- Type: int
+- Default: 5000
+
+--azureblob-access-tier
+
+Access tier of blob: hot, cool or archive.
+
+Archived blobs can be restored by setting access tier to hot or cool.
+Leave blank if you intend to use default access tier, which is set at
+account level.
+
+If there is no "access tier" specified, rclone doesn't apply any tier.
+rclone performs "Set Tier" operation on blobs while uploading, if
+objects are not modified, specifying "access tier" to new one will have
+no effect. If blobs are in "archive tier" at remote, trying to perform
+data transfer operations from remote will not be allowed. User should
+first restore by tiering blob to "Hot" or "Cool".
+
+- Config: access_tier
+- Env Var: RCLONE_AZUREBLOB_ACCESS_TIER
+- Type: string
+- Default: ""
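For example, to upload directly into the cool tier (a sketch; the remote and container names are hypothetical):

```
rclone copy --azureblob-access-tier Cool /local/archive azblob:container/archive
```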
Limitations
@@ -9640,51 +11584,36 @@ Here is an example of how to make a remote called remote. First run:
This will guide you through an interactive setup process:
- No remotes found - make a new one
+ e) Edit existing remote
n) New remote
+ d) Delete remote
+ r) Rename remote
+ c) Copy remote
s) Set configuration password
- n/s> n
+ q) Quit config
+ e/n/d/r/c/s/q> n
name> remote
Type of storage to configure.
+ Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
- 1 / Amazon Drive
- \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
- \ "s3"
- 3 / Backblaze B2
- \ "b2"
- 4 / Dropbox
- \ "dropbox"
- 5 / Encrypt/Decrypt a remote
- \ "crypt"
- 6 / Google Cloud Storage (this is not Google Drive)
- \ "google cloud storage"
- 7 / Google Drive
- \ "drive"
- 8 / Hubic
- \ "hubic"
- 9 / Local Disk
- \ "local"
- 10 / Microsoft OneDrive
+ ...
+ 17 / Microsoft OneDrive
\ "onedrive"
- 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
- \ "swift"
- 12 / SSH/SFTP Connection
- \ "sftp"
- 13 / Yandex Disk
- \ "yandex"
- Storage> 10
- Microsoft App Client Id - leave blank normally.
+ ...
+ Storage> 17
+ Microsoft App Client Id
+ Leave blank normally.
+ Enter a string value. Press Enter for the default ("").
client_id>
- Microsoft App Client Secret - leave blank normally.
+ Microsoft App Client Secret
+ Leave blank normally.
+ Enter a string value. Press Enter for the default ("").
client_secret>
+ Edit advanced config? (y/n)
+ y) Yes
+ n) No
+ y/n> n
Remote config
- Choose OneDrive account type?
- * Say b for a OneDrive business account
- * Say p for a personal OneDrive account
- b) Business
- p) Personal
- b/p> p
Use auto config?
* Say Y if not sure
* Say N if you are working on a remote or headless machine
@@ -9695,11 +11624,32 @@ This will guide you through an interactive setup process:
Log in and authorize rclone for access
Waiting for code...
Got code
+ Choose a number from below, or type in an existing value
+ 1 / OneDrive Personal or Business
+ \ "onedrive"
+ 2 / Sharepoint site
+ \ "sharepoint"
+ 3 / Type in driveID
+ \ "driveid"
+ 4 / Type in SiteID
+ \ "siteid"
+ 5 / Search a Sharepoint site
+ \ "search"
+ Your choice> 1
+ Found 1 drives, please select the one you want to use:
+ 0: OneDrive (business) id=b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk
+ Chose drive to use:> 0
+ Found drive 'root' of type 'business', URL: https://org-my.sharepoint.com/personal/you/Documents
+ Is that okay?
+ y) Yes
+ n) No
+ y/n> y
--------------------
[remote]
- client_id =
- client_secret =
- token = {"access_token":"XXXXXX"}
+ type = onedrive
+ token = {"access_token":"youraccesstoken","token_type":"Bearer","refresh_token":"yourrefreshtoken","expiry":"2018-08-26T22:39:52.486512262+08:00"}
+ drive_id = b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk
+ drive_type = business
--------------------
y) Yes this is OK
e) Edit this remote
@@ -9729,22 +11679,29 @@ To copy a local directory to an OneDrive directory called backup
rclone copy /home/source remote:backup
-OneDrive for Business
+Getting your own Client ID and Key
-There is additional support for OneDrive for Business. Select "b" when
-ask
+rclone uses a pair of Client ID and Key shared by all rclone users when
+performing requests by default. If you are having problems with them
+(E.g., seeing a lot of throttling), you can get your own Client ID and
+Key by following the steps below:
- Choose OneDrive account type?
- * Say b for a OneDrive business account
- * Say p for a personal OneDrive account
- b) Business
- p) Personal
- b/p>
+1. Open https://apps.dev.microsoft.com/#/appList, then click Add an app
+ (Choose Converged applications if applicable)
+2. Enter a name for your app, and click continue. Copy and keep the
+ Application Id under the app name for later use.
+3. Under section Application Secrets, click Generate New Password. Copy
+ and keep that password for later use.
+4. Under section Platforms, click Add platform, then Web. Enter
+ http://localhost:53682/ in Redirect URLs.
+5. Under section Microsoft Graph Permissions, Add these
+ delegated permissions: Files.Read, Files.ReadWrite, Files.Read.All,
+ Files.ReadWrite.All, offline_access, User.Read.
+6. Scroll to the bottom and click Save.
-After that rclone requires an authentication of your account. The
-application will first authenticate your account, then query the
-OneDrive resource URL and do a second (silent) authentication for this
-resource URL.
+Now the application is complete. Run rclone config to create or edit a
+OneDrive remote. Supply the app ID and password as Client ID and Secret,
+respectively. rclone will walk you through the remaining steps.
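The resulting credentials can also be supplied non-interactively, a sketch using rclone's config command (the values below are placeholders):

```
rclone config create remote onedrive \
    client_id YOUR_APP_ID client_secret YOUR_APP_PASSWORD
```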
Modified time and hashes
@@ -9764,14 +11721,76 @@ doesn't provide an API to permanently delete files, nor to empty the
trash, so you will have to do that with one of Microsoft's apps or via
the OneDrive website.
-Specific options
+Standard Options
-Here are the command line options specific to this cloud storage system.
+Here are the standard options specific to onedrive (Microsoft OneDrive).
---onedrive-chunk-size=SIZE
+--onedrive-client-id
-Above this size files will be chunked - must be multiple of 320k. The
-default is 10MB. Note that the chunks will be buffered into memory.
+Microsoft App Client Id. Leave blank normally.
+
+- Config: client_id
+- Env Var: RCLONE_ONEDRIVE_CLIENT_ID
+- Type: string
+- Default: ""
+
+--onedrive-client-secret
+
+Microsoft App Client Secret. Leave blank normally.
+
+- Config: client_secret
+- Env Var: RCLONE_ONEDRIVE_CLIENT_SECRET
+- Type: string
+- Default: ""
+
+Advanced Options
+
+Here are the advanced options specific to onedrive (Microsoft OneDrive).
+
+--onedrive-chunk-size
+
+Chunk size to upload files with - must be multiple of 320k.
+
+Above this size files will be chunked - must be multiple of 320k. Note
+that the chunks will be buffered into memory.
+
+- Config: chunk_size
+- Env Var: RCLONE_ONEDRIVE_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 10M
+
+--onedrive-drive-id
+
+The ID of the drive to use
+
+- Config: drive_id
+- Env Var: RCLONE_ONEDRIVE_DRIVE_ID
+- Type: string
+- Default: ""
+
+--onedrive-drive-type
+
+The type of the drive ( personal | business | documentLibrary )
+
+- Config: drive_type
+- Env Var: RCLONE_ONEDRIVE_DRIVE_TYPE
+- Type: string
+- Default: ""
+
+--onedrive-expose-onenote-files
+
+Set to make OneNote files show up in directory listings.
+
+By default rclone will hide OneNote files in directory listings because
+operations like "Open" and "Update" won't work on them. But this
+behaviour may also prevent you from deleting them. If you want to delete
+OneNote files or otherwise want them to show up in directory listing,
+set this option.
+
+- Config: expose_onenote_files
+- Env Var: RCLONE_ONEDRIVE_EXPOSE_ONENOTE_FILES
+- Type: bool
+- Default: false
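For instance, to be able to delete a OneNote file (a sketch; the remote name and path are hypothetical):

```
rclone delete --onedrive-expose-onenote-files remote:Notes/old-notebook.one
```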
Limitations
@@ -9913,13 +11932,27 @@ OpenDrive allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
not.
-Deleting files
+Standard Options
-Any files you delete with rclone will end up in the trash. Amazon don't
-provide an API to permanently delete files, nor to empty the trash, so
-you will have to do that with one of Amazon's apps or via the OpenDrive
-website. As of November 17, 2016, files are automatically deleted by
-Amazon from the trash after 30 days.
+Here are the standard options specific to opendrive (OpenDrive).
+
+--opendrive-username
+
+Username
+
+- Config: username
+- Env Var: RCLONE_OPENDRIVE_USERNAME
+- Type: string
+- Default: ""
+
+--opendrive-password
+
+Password.
+
+- Config: password
+- Env Var: RCLONE_OPENDRIVE_PASSWORD
+- Type: string
+- Default: ""
Limitations
@@ -10078,6 +12111,90 @@ In order of precedence:
- Access Key ID: QS_ACCESS_KEY_ID or QS_ACCESS_KEY
- Secret Access Key: QS_SECRET_ACCESS_KEY or QS_SECRET_KEY
+Standard Options
+
+Here are the standard options specific to qingstor (QingCloud Object
+Storage).
+
+--qingstor-env-auth
+
+Get QingStor credentials from runtime. Only applies if access_key_id and
+secret_access_key is blank.
+
+- Config: env_auth
+- Env Var: RCLONE_QINGSTOR_ENV_AUTH
+- Type: bool
+- Default: false
+- Examples:
+ - "false"
+ - Enter QingStor credentials in the next step
+ - "true"
+ - Get QingStor credentials from the environment (env vars or
+ IAM)
+
+--qingstor-access-key-id
+
+QingStor Access Key ID. Leave blank for anonymous access or runtime
+credentials.
+
+- Config: access_key_id
+- Env Var: RCLONE_QINGSTOR_ACCESS_KEY_ID
+- Type: string
+- Default: ""
+
+--qingstor-secret-access-key
+
+QingStor Secret Access Key (password). Leave blank for anonymous
+access or runtime credentials.
+
+- Config: secret_access_key
+- Env Var: RCLONE_QINGSTOR_SECRET_ACCESS_KEY
+- Type: string
+- Default: ""
+
+--qingstor-endpoint
+
+Enter an endpoint URL to connect to the QingStor API. Leave blank to
+use the default value "https://qingstor.com:443".
+
+- Config: endpoint
+- Env Var: RCLONE_QINGSTOR_ENDPOINT
+- Type: string
+- Default: ""
+
+--qingstor-zone
+
+Zone to connect to. Default is "pek3a".
+
+- Config: zone
+- Env Var: RCLONE_QINGSTOR_ZONE
+- Type: string
+- Default: ""
+- Examples:
+ - "pek3a"
+ - The Beijing (China) Three Zone
+ - Needs location constraint pek3a.
+ - "sh1a"
+ - The Shanghai (China) First Zone
+ - Needs location constraint sh1a.
+ - "gd2a"
+ - The Guangdong (China) Second Zone
+ - Needs location constraint gd2a.
+
+Advanced Options
+
+Here are the advanced options specific to qingstor (QingCloud Object
+Storage).
+
+--qingstor-connection-retries
+
+Number of connection retries.
+
+- Config: connection_retries
+- Env Var: RCLONE_QINGSTOR_CONNECTION_RETRIES
+- Type: int
+- Default: 3
+
Swift
@@ -10328,21 +12445,205 @@ with --use-server-modtime, you can avoid the extra API call and simply
upload files whose local modtime is newer than the time it was last
uploaded.
-Specific options
+Standard Options
-Here are the command line options specific to this cloud storage system.
+Here are the standard options specific to swift (Openstack Swift
+(Rackspace Cloud Files, Memset Memstore, OVH)).
---swift-storage-policy=STRING
+--swift-env-auth
-Apply the specified storage policy when creating a new container. The
-policy cannot be changed afterwards. The allowed configuration values
-and their meaning depend on your Swift storage provider.
+Get swift credentials from environment variables in standard OpenStack
+form.
---swift-chunk-size=SIZE
+- Config: env_auth
+- Env Var: RCLONE_SWIFT_ENV_AUTH
+- Type: bool
+- Default: false
+- Examples:
+ - "false"
+ - Enter swift credentials in the next step
+ - "true"
+ - Get swift credentials from environment vars. Leave other
+ fields blank if using this.
+
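A sketch of the standard OpenStack form (all values below are placeholders):

```
export OS_USERNAME=user OS_PASSWORD=secret \
       OS_AUTH_URL=https://auth.example.com/v2.0 OS_TENANT_NAME=tenant
rclone lsd --swift-env-auth swift:
```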
+--swift-user
+
+User name to log in (OS_USERNAME).
+
+- Config: user
+- Env Var: RCLONE_SWIFT_USER
+- Type: string
+- Default: ""
+
+--swift-key
+
+API key or password (OS_PASSWORD).
+
+- Config: key
+- Env Var: RCLONE_SWIFT_KEY
+- Type: string
+- Default: ""
+
+--swift-auth
+
+Authentication URL for server (OS_AUTH_URL).
+
+- Config: auth
+- Env Var: RCLONE_SWIFT_AUTH
+- Type: string
+- Default: ""
+- Examples:
+ - "https://auth.api.rackspacecloud.com/v1.0"
+ - Rackspace US
+ - "https://lon.auth.api.rackspacecloud.com/v1.0"
+ - Rackspace UK
+ - "https://identity.api.rackspacecloud.com/v2.0"
+ - Rackspace v2
+ - "https://auth.storage.memset.com/v1.0"
+ - Memset Memstore UK
+ - "https://auth.storage.memset.com/v2.0"
+ - Memset Memstore UK v2
+ - "https://auth.cloud.ovh.net/v2.0"
+ - OVH
+
+--swift-user-id
+
+User ID to log in - optional - most swift systems use user and leave
+this blank (v3 auth) (OS_USER_ID).
+
+- Config: user_id
+- Env Var: RCLONE_SWIFT_USER_ID
+- Type: string
+- Default: ""
+
+--swift-domain
+
+User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+
+- Config: domain
+- Env Var: RCLONE_SWIFT_DOMAIN
+- Type: string
+- Default: ""
+
+--swift-tenant
+
+Tenant name - optional for v1 auth, this or tenant_id required otherwise
+(OS_TENANT_NAME or OS_PROJECT_NAME)
+
+- Config: tenant
+- Env Var: RCLONE_SWIFT_TENANT
+- Type: string
+- Default: ""
+
+--swift-tenant-id
+
+Tenant ID - optional for v1 auth, this or tenant required otherwise
+(OS_TENANT_ID)
+
+- Config: tenant_id
+- Env Var: RCLONE_SWIFT_TENANT_ID
+- Type: string
+- Default: ""
+
+--swift-tenant-domain
+
+Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+
+- Config: tenant_domain
+- Env Var: RCLONE_SWIFT_TENANT_DOMAIN
+- Type: string
+- Default: ""
+
+--swift-region
+
+Region name - optional (OS_REGION_NAME)
+
+- Config: region
+- Env Var: RCLONE_SWIFT_REGION
+- Type: string
+- Default: ""
+
+--swift-storage-url
+
+Storage URL - optional (OS_STORAGE_URL)
+
+- Config: storage_url
+- Env Var: RCLONE_SWIFT_STORAGE_URL
+- Type: string
+- Default: ""
+
+--swift-auth-token
+
+Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+
+- Config: auth_token
+- Env Var: RCLONE_SWIFT_AUTH_TOKEN
+- Type: string
+- Default: ""
+
+--swift-auth-version
+
+AuthVersion - optional - set to (1,2,3) if your auth URL has no version
+(ST_AUTH_VERSION)
+
+- Config: auth_version
+- Env Var: RCLONE_SWIFT_AUTH_VERSION
+- Type: int
+- Default: 0
+
+--swift-endpoint-type
+
+Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)
+
+- Config: endpoint_type
+- Env Var: RCLONE_SWIFT_ENDPOINT_TYPE
+- Type: string
+- Default: "public"
+- Examples:
+ - "public"
+ - Public (default, choose this if not sure)
+ - "internal"
+ - Internal (use internal service net)
+ - "admin"
+ - Admin
+
+--swift-storage-policy
+
+The storage policy to use when creating a new container
+
+This applies the specified storage policy when creating a new container.
+The policy cannot be changed afterwards. The allowed configuration
+values and their meaning depend on your Swift storage provider.
+
+- Config: storage_policy
+- Env Var: RCLONE_SWIFT_STORAGE_POLICY
+- Type: string
+- Default: ""
+- Examples:
+ - ""
+ - Default
+ - "pcs"
+ - OVH Public Cloud Storage
+ - "pca"
+ - OVH Public Cloud Archive
+
+Advanced Options
+
+Here are the advanced options specific to swift (Openstack Swift
+(Rackspace Cloud Files, Memset Memstore, OVH)).
+
+--swift-chunk-size
+
+Above this size files will be chunked into a _segments container.
Above this size files will be chunked into a _segments container. The
default for this is 5GB which is its maximum value.
+- Config: chunk_size
+- Env Var: RCLONE_SWIFT_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 5G
+
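Every swift option above can also be set with the environment variable listed beside it. The variable name is derived mechanically from the backend name and config key; a small bash sketch of that convention (the `rclone_env_name` helper is illustrative only, not part of rclone):

```shell
#!/usr/bin/env bash
# Sketch of the env var naming convention shown in the listings above:
# RCLONE_<BACKEND>_<CONFIG_KEY>, with both parts upper-cased.
# (rclone_env_name is an illustrative helper, not an rclone command.)
rclone_env_name() {
  local backend="$1" key="$2"
  printf 'RCLONE_%s_%s\n' "${backend^^}" "${key^^}"
}

rclone_env_name swift auth_version    # RCLONE_SWIFT_AUTH_VERSION
rclone_env_name swift storage_policy  # RCLONE_SWIFT_STORAGE_POLICY
```

So `export RCLONE_SWIFT_AUTH_VERSION=3` is equivalent to setting `auth_version = 3` in the config file.
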
Modified time
The modified time is stored as metadata on the object as
@@ -10504,6 +12805,28 @@ Deleted files will be moved to the trash. Your subscription level will
determine how long items stay in the trash. rclone cleanup can be used
to empty the trash.
+Standard Options
+
+Here are the standard options specific to pcloud (Pcloud).
+
+--pcloud-client-id
+
+Pcloud App Client Id. Leave blank normally.
+
+- Config: client_id
+- Env Var: RCLONE_PCLOUD_CLIENT_ID
+- Type: string
+- Default: ""
+
+--pcloud-client-secret
+
+Pcloud App Client Secret. Leave blank normally.
+
+- Config: client_secret
+- Env Var: RCLONE_PCLOUD_CLIENT_SECRET
+- Type: string
+- Default: ""
+
SFTP
@@ -10644,29 +12967,6 @@ And then at the end of the session
These commands can be used in scripts of course.
-Specific options
-
-Here are the command line options specific to this remote.
-
---sftp-ask-password
-
-Ask for the SFTP password if needed when no password has been
-configured.
-
---ssh-path-override
-
-Override path used by SSH connection. Allows checksum calculation when
-SFTP and SSH paths are different. This issue affects among others
-Synology NAS boxes.
-
-Shared folders can be found in directories representing volumes
-
- rclone sync /home/local/directory remote:/directory --ssh-path-override /volume2/directory
-
-Home directory can be found in a shared folder called homes
-
- rclone sync /home/local/directory remote:/home/directory --ssh-path-override /volume1/homes/USER/directory
-
Modified time
Modified times are stored on the server to 1 second precision.
@@ -10679,6 +12979,127 @@ mod_sftp). If you are using one of these servers, you can set the option
set_modtime = false in your RClone backend configuration to disable this
behaviour.
+Standard Options
+
+Here are the standard options specific to sftp (SSH/SFTP Connection).
+
+--sftp-host
+
+SSH host to connect to
+
+- Config: host
+- Env Var: RCLONE_SFTP_HOST
+- Type: string
+- Default: ""
+- Examples:
+ - "example.com"
+ - Connect to example.com
+
+--sftp-user
+
+SSH username, leave blank for current username, ncw
+
+- Config: user
+- Env Var: RCLONE_SFTP_USER
+- Type: string
+- Default: ""
+
+--sftp-port
+
+SSH port, leave blank to use default (22)
+
+- Config: port
+- Env Var: RCLONE_SFTP_PORT
+- Type: string
+- Default: ""
+
+--sftp-pass
+
+SSH password, leave blank to use ssh-agent.
+
+- Config: pass
+- Env Var: RCLONE_SFTP_PASS
+- Type: string
+- Default: ""
+
+--sftp-key-file
+
+Path to unencrypted PEM-encoded private key file, leave blank to use
+ssh-agent.
+
+- Config: key_file
+- Env Var: RCLONE_SFTP_KEY_FILE
+- Type: string
+- Default: ""
+
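If you do not already have a suitable key, one way to produce an unencrypted PEM-encoded key that `key_file` can point at is with ssh-keygen (the output path and key size here are only examples):

```shell
# Generate an unencrypted (-N "") PEM-encoded RSA key suitable for
# --sftp-key-file. The output path is just an example.
keydir="$(mktemp -d)"
ssh-keygen -q -t rsa -b 2048 -m PEM -N "" -f "$keydir/rclone_sftp_key"
head -n 1 "$keydir/rclone_sftp_key"   # -----BEGIN RSA PRIVATE KEY-----
```

The `-m PEM` flag is what makes the key PEM-encoded rather than OpenSSH's newer native format.
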
+--sftp-use-insecure-cipher
+
+Enable the use of the aes128-cbc cipher. This cipher is insecure and may
+allow plaintext data to be recovered by an attacker.
+
+- Config: use_insecure_cipher
+- Env Var: RCLONE_SFTP_USE_INSECURE_CIPHER
+- Type: bool
+- Default: false
+- Examples:
+ - "false"
+ - Use default Cipher list.
+ - "true"
+ - Enables the use of the aes128-cbc cipher.
+
+--sftp-disable-hashcheck
+
+Disable the execution of SSH commands to determine if remote file
+hashing is available. Leave blank or set to false to enable hashing
+(recommended), set to true to disable hashing.
+
+- Config: disable_hashcheck
+- Env Var: RCLONE_SFTP_DISABLE_HASHCHECK
+- Type: bool
+- Default: false
+
+Advanced Options
+
+Here are the advanced options specific to sftp (SSH/SFTP Connection).
+
+--sftp-ask-password
+
+Allow asking for SFTP password when needed.
+
+- Config: ask_password
+- Env Var: RCLONE_SFTP_ASK_PASSWORD
+- Type: bool
+- Default: false
+
+--sftp-path-override
+
+Override path used by SSH connection.
+
+This allows checksum calculation when SFTP and SSH paths are different.
+This issue affects among others Synology NAS boxes.
+
+Shared folders can be found in directories representing volumes
+
+    rclone sync /home/local/directory remote:/directory --sftp-path-override /volume2/directory
+
+Home directory can be found in a shared folder called "homes"
+
+    rclone sync /home/local/directory remote:/home/directory --sftp-path-override /volume1/homes/USER/directory
+
+- Config: path_override
+- Env Var: RCLONE_SFTP_PATH_OVERRIDE
+- Type: string
+- Default: ""
+
+--sftp-set-modtime
+
+Set the modified time on the remote if set.
+
+- Config: set_modtime
+- Env Var: RCLONE_SFTP_SET_MODTIME
+- Type: bool
+- Default: true
+
Limitations
SFTP supports checksums if the same login has shell access and md5sum or
@@ -10709,6 +13130,165 @@ with it: --dump-headers, --dump-bodies, --dump-auth
Note that --timeout isn't supported (but --contimeout is).
+Union
+
+The union remote provides a unification similar to UnionFS using other
+remotes.
+
+Paths may be as deep as required or a local path, eg
+remote:directory/subdirectory or /directory/subdirectory.
+
+During the initial setup with rclone config you will specify the target
+remotes as a space separated list. The target remotes can either be
+local paths or other remotes.
+
+The order of the remotes is important as it defines which remotes take
+precedence over others if there are files with the same name in the same
+logical path. The last remote is the topmost remote and replaces files
+with the same name from previous remotes.
+
+Only the last remote is used to write to and delete from, all other
+remotes are read-only.
+
+Subfolders can be used in the target remote. Assume a union remote named
+backup with the remotes mydrive:private/backup mydrive2:/backup.
+Invoking rclone mkdir backup:desktop is exactly the same as invoking
+rclone mkdir mydrive2:/backup/desktop.
+
+There will be no special handling of paths containing .. segments.
+Invoking rclone mkdir backup:../desktop is exactly the same as invoking
+rclone mkdir mydrive2:/backup/../desktop.
+
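The precedence rule can be illustrated without rclone at all: overlaying plain local directories in listed order reproduces the "last remote wins" behaviour described above (dir1..dir3 stand in for the remotes):

```shell
# Sketch: illustrate union precedence with plain local directories.
# dir1..dir3 stand in for the remotes; dir3 is listed last, so it wins.
work="$(mktemp -d)"
mkdir -p "$work/dir1" "$work/dir2" "$work/dir3"
echo "from dir1" > "$work/dir1/a.txt"
echo "from dir3" > "$work/dir3/a.txt"

# Overlay in listed order - later directories replace earlier files,
# mirroring "the last remote is the topmost remote".
merged="$(mktemp -d)"
for d in dir1 dir2 dir3; do
  cp -r "$work/$d/." "$merged/"
done

cat "$merged/a.txt"   # from dir3
```
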
+Here is an example of how to make a union called remote for local
+folders. First run:
+
+ rclone config
+
+This will guide you through an interactive setup process:
+
+ No remotes found - make a new one
+ n) New remote
+ s) Set configuration password
+ q) Quit config
+ n/s/q> n
+ name> remote
+ Type of storage to configure.
+ Choose a number from below, or type in your own value
+ 1 / Alias for a existing remote
+ \ "alias"
+ 2 / Amazon Drive
+ \ "amazon cloud drive"
+ 3 / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)
+ \ "s3"
+ 4 / Backblaze B2
+ \ "b2"
+ 5 / Box
+ \ "box"
+ 6 / Builds a stackable unification remote, which can appear to merge the contents of several remotes
+ \ "union"
+ 7 / Cache a remote
+ \ "cache"
+ 8 / Dropbox
+ \ "dropbox"
+ 9 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 10 / FTP Connection
+ \ "ftp"
+ 11 / Google Cloud Storage (this is not Google Drive)
+ \ "google cloud storage"
+ 12 / Google Drive
+ \ "drive"
+ 13 / Hubic
+ \ "hubic"
+ 14 / JottaCloud
+ \ "jottacloud"
+ 15 / Local Disk
+ \ "local"
+ 16 / Mega
+ \ "mega"
+ 17 / Microsoft Azure Blob Storage
+ \ "azureblob"
+ 18 / Microsoft OneDrive
+ \ "onedrive"
+ 19 / OpenDrive
+ \ "opendrive"
+ 20 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ \ "swift"
+ 21 / Pcloud
+ \ "pcloud"
+ 22 / QingCloud Object Storage
+ \ "qingstor"
+ 23 / SSH/SFTP Connection
+ \ "sftp"
+ 24 / Webdav
+ \ "webdav"
+ 25 / Yandex Disk
+ \ "yandex"
+ 26 / http Connection
+ \ "http"
+ Storage> union
+ List of space separated remotes.
+ Can be 'remotea:test/dir remoteb:', '"remotea:test/space dir" remoteb:', etc.
+ The last remote is used to write to.
+ Enter a string value. Press Enter for the default ("").
+ remotes>
+ Remote config
+ --------------------
+ [remote]
+ type = union
+ remotes = C:\dir1 C:\dir2 C:\dir3
+ --------------------
+ y) Yes this is OK
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y
+ Current remotes:
+
+ Name Type
+ ==== ====
+ remote union
+
+ e) Edit existing remote
+ n) New remote
+ d) Delete remote
+ r) Rename remote
+ c) Copy remote
+ s) Set configuration password
+ q) Quit config
+ e/n/d/r/c/s/q> q
+
+Once configured you can then use rclone like this,
+
+List directories in top level in C:\dir1, C:\dir2 and C:\dir3
+
+ rclone lsd remote:
+
+List all the files in C:\dir1, C:\dir2 and C:\dir3
+
+ rclone ls remote:
+
+Copy another local directory to the union directory called source, which
+will be placed into C:\dir3
+
+ rclone copy C:\source remote:source
+
+Standard Options
+
+Here are the standard options specific to union (A stackable unification
+remote, which can appear to merge the contents of several remotes).
+
+--union-remotes
+
+List of space separated remotes. Can be 'remotea:test/dir remoteb:',
+'"remotea:test/space dir" remoteb:', etc. The last remote is used to
+write to.
+
+- Config: remotes
+- Env Var: RCLONE_UNION_REMOTES
+- Type: string
+- Default: ""
+
+
WebDAV
Paths are specified as remote:path
@@ -10803,6 +13383,67 @@ Owncloud or Nextcloud rclone will support modified times.
Hashes are not supported.
+Standard Options
+
+Here are the standard options specific to webdav (Webdav).
+
+--webdav-url
+
+URL of http host to connect to
+
+- Config: url
+- Env Var: RCLONE_WEBDAV_URL
+- Type: string
+- Default: ""
+- Examples:
+ - "https://example.com"
+ - Connect to example.com
+
+--webdav-vendor
+
+Name of the Webdav site/service/software you are using
+
+- Config: vendor
+- Env Var: RCLONE_WEBDAV_VENDOR
+- Type: string
+- Default: ""
+- Examples:
+ - "nextcloud"
+ - Nextcloud
+ - "owncloud"
+ - Owncloud
+ - "sharepoint"
+ - Sharepoint
+ - "other"
+ - Other site/service or software
+
+--webdav-user
+
+User name
+
+- Config: user
+- Env Var: RCLONE_WEBDAV_USER
+- Type: string
+- Default: ""
+
+--webdav-pass
+
+Password.
+
+- Config: pass
+- Env Var: RCLONE_WEBDAV_PASS
+- Type: string
+- Default: ""
+
+--webdav-bearer-token
+
+Bearer token instead of user/pass (eg a Macaroon)
+
+- Config: bearer_token
+- Env Var: RCLONE_WEBDAV_BEARER_TOKEN
+- Type: string
+- Default: ""
+
Provider notes
@@ -11024,6 +13665,28 @@ If you wish to empty your trash you can use the rclone cleanup remote:
command which will permanently delete all your trashed files. This
command does not take any path arguments.
+Standard Options
+
+Here are the standard options specific to yandex (Yandex Disk).
+
+--yandex-client-id
+
+Yandex Client Id. Leave blank normally.
+
+- Config: client_id
+- Env Var: RCLONE_YANDEX_CLIENT_ID
+- Type: string
+- Default: ""
+
+--yandex-client-secret
+
+Yandex Client Secret. Leave blank normally.
+
+- Config: client_secret
+- Env Var: RCLONE_YANDEX_CLIENT_SECRET
+- Type: string
+- Default: ""
+
Local Filesystem
@@ -11091,17 +13754,13 @@ This will use UNC paths on c:\src but not on z:\dst. Of course this will
cause problems if the absolute path length of a file exceeds 258
characters on z, so only use this option if you have to.
-Specific options
-
-Here are the command line options specific to local storage
-
---copy-links, -L
+Symlinks / Junction points
Normally rclone will ignore symlinks or junction points (which behave
like symlinks under Windows).
-If you supply this flag then rclone will follow the symlink and copy the
-pointed to file or directory.
+If you supply --copy-links or -L then rclone will follow the symlink and
+copy the pointed to file or directory.
This flag applies to all commands.
@@ -11130,28 +13789,13 @@ and
6 b/two
6 b/one
---local-no-check-updated
+Restricting filesystems with --one-file-system
-Don't check to see if the files change during upload.
+Normally rclone will recurse through filesystems as mounted.
-Normally rclone checks the size and modification time of files as they
-are being uploaded and aborts with a message which starts
-can't copy - source file is being updated if the file changes during
-upload.
-
-However on some file systems this modification time check may fail (eg
-Glusterfs #2206) so this check can be disabled with this flag.
-
---local-no-unicode-normalization
-
-This flag is deprecated now. Rclone no longer normalizes unicode file
-names, but it compares them with unicode normalization in the sync
-routine instead.
-
---one-file-system, -x
-
-This tells rclone to stay in the filesystem specified by the root and
-not to recurse into different file systems.
+However if you set --one-file-system or -x this tells rclone to stay in
+the filesystem specified by the root and not to recurse into different
+file systems.
For example if you have a directory hierarchy like this
@@ -11180,19 +13824,235 @@ NB Rclone (like most unix tools such as du, rsync and tar) treats a bind
mount to the same device as being on the same filesystem.
NB This flag is only available on Unix based systems. On systems where
-it isn't supported (eg Windows) it will not appear as an valid flag.
+it isn't supported (eg Windows) it will be ignored.
+
+Standard Options
+
+Here are the standard options specific to local (Local Disk).
+
+--local-nounc
+
+Disable UNC (long path names) conversion on Windows
+
+- Config: nounc
+- Env Var: RCLONE_LOCAL_NOUNC
+- Type: string
+- Default: ""
+- Examples:
+ - "true"
+ - Disables long file names
+
+Advanced Options
+
+Here are the advanced options specific to local (Local Disk).
+
+--copy-links
+
+Follow symlinks and copy the pointed to item.
+
+- Config: copy_links
+- Env Var: RCLONE_LOCAL_COPY_LINKS
+- Type: bool
+- Default: false
--skip-links
-This flag disables warning messages on skipped symlinks or junction
-points, as you explicitly acknowledge that they should be skipped.
+Don't warn about skipped symlinks. This flag disables warning messages
+on skipped symlinks or junction points, as you explicitly acknowledge
+that they should be skipped.
+
+- Config: skip_links
+- Env Var: RCLONE_LOCAL_SKIP_LINKS
+- Type: bool
+- Default: false
+
+--local-no-unicode-normalization
+
+Don't apply unicode normalization to paths and filenames (Deprecated)
+
+This flag is deprecated now. Rclone no longer normalizes unicode file
+names, but it compares them with unicode normalization in the sync
+routine instead.
+
+- Config: no_unicode_normalization
+- Env Var: RCLONE_LOCAL_NO_UNICODE_NORMALIZATION
+- Type: bool
+- Default: false
+
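The problem this deprecated flag addressed is that the same visible filename can have more than one byte representation. A quick shell demonstration of two byte-wise different spellings of "café" (the sync routine now compares such names in normalized form instead of rewriting them):

```shell
# Two byte-wise different filenames that look identical on screen.
nfc=$(printf 'caf\303\251')     # e-acute as one precomposed codepoint
nfd=$(printf 'cafe\314\201')    # plain e followed by a combining accent
if [ "$nfc" = "$nfd" ]; then
  echo "same bytes"
else
  echo "different bytes"       # this branch runs
fi
```
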
+--local-no-check-updated
+
+Don't check to see if the files change during upload
+
+Normally rclone checks the size and modification time of files as they
+are being uploaded and aborts with a message which starts "can't copy -
+source file is being updated" if the file changes during upload.
+
+However on some file systems this modification time check may fail (eg
+Glusterfs #2206) so this check can be disabled with this flag.
+
+- Config: no_check_updated
+- Env Var: RCLONE_LOCAL_NO_CHECK_UPDATED
+- Type: bool
+- Default: false
+
+--one-file-system
+
+Don't cross filesystem boundaries (unix/macOS only).
+
+- Config: one_file_system
+- Env Var: RCLONE_LOCAL_ONE_FILE_SYSTEM
+- Type: bool
+- Default: false
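
Filesystem boundaries of the kind `--one-file-system` honours correspond to device numbers, which is also how unix tools such as `find -xdev` and `du -x` decide. A sketch of that check (GNU `stat` assumed; `same_filesystem` is an illustrative helper, not an rclone command):

```shell
# Sketch: paths on the same filesystem report the same device ID;
# a different ID marks a mount boundary. GNU stat's -c %d is assumed.
same_filesystem() {
  [ "$(stat -c %d "$1")" = "$(stat -c %d "$2")" ]
}

d="$(mktemp -d)"
touch "$d/one" "$d/two"
same_filesystem "$d/one" "$d/two" && echo "same device"
```
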
CHANGELOG
-v1.42 - 2018-09-01
+v1.44 - 2018-10-15
+
+- New commands
+ - serve ftp: Add ftp server (Antoine GIRARD)
+ - settier: perform storage tier changes on supported remotes
+ (sandeepkru)
+- New Features
+ - Reworked command line help
+ - Make default help less verbose (Nick Craig-Wood)
+ - Split flags up into global and backend flags (Nick
+ Craig-Wood)
+ - Implement specialised help for flags and backends (Nick
+ Craig-Wood)
+ - Show URL of backend help page when starting config (Nick
+ Craig-Wood)
+ - stats: Long names now split in center (Joanna Marek)
+ - Add --log-format flag for more control over log output (dcpu)
+ - rc: Add support for OPTIONS and basic CORS (frenos)
+ - stats: show FatalErrors and NoRetryErrors in stats (Cédric
+ Connes)
+- Bug Fixes
+ - Fix -P not ending with a new line (Nick Craig-Wood)
+ - config: don't create default config dir when user supplies
+ --config (albertony)
+ - Don't print non-ASCII characters with --progress on windows
+ (Nick Craig-Wood)
+ - Correct logs for excluded items (ssaqua)
+- Mount
+ - Remove EXPERIMENTAL tags (Nick Craig-Wood)
+- VFS
+ - Fix race condition detected by serve ftp tests (Nick Craig-Wood)
+ - Add vfs/poll-interval rc command (Fabian Möller)
+ - Enable rename for nearly all remotes using server side Move or
+ Copy (Nick Craig-Wood)
+ - Reduce directory cache cleared by poll-interval (Fabian Möller)
+ - Remove EXPERIMENTAL tags (Nick Craig-Wood)
+- Local
+ - Skip bad symlinks in dir listing with -L enabled (Cédric Connes)
+ - Preallocate files on Windows to reduce fragmentation (Nick
+ Craig-Wood)
+ - Preallocate files on linux with fallocate(2) (Nick Craig-Wood)
+- Cache
+ - Add cache/fetch rc function (Fabian Möller)
+ - Fix worker scale down (Fabian Möller)
+ - Improve performance by not sending info requests for cached
+ chunks (dcpu)
+ - Fix error return value of cache/fetch rc method (Fabian Möller)
+ - Documentation fix for cache-chunk-total-size (Anagh Kumar
+ Baranwal)
+ - Preserve leading / in wrapped remote path (Fabian Möller)
+ - Add plex_insecure option to skip certificate validation (Fabian
+ Möller)
+ - Remove entries that no longer exist in the source (dcpu)
+- Crypt
+ - Preserve leading / in wrapped remote path (Fabian Möller)
+- Alias
+ - Fix handling of Windows network paths (Nick Craig-Wood)
+- Azure Blob
+ - Add --azureblob-list-chunk parameter (Santiago Rodríguez)
+ - Implemented settier command support on azureblob remote.
+ (sandeepkru)
+ - Work around SDK bug which causes errors for chunk-sized files
+ (Nick Craig-Wood)
+- Box
+ - Implement link sharing. (Sebastian Bünger)
+- Drive
+ - Add --drive-import-formats - google docs can now be imported
+ (Fabian Möller)
+ - Rewrite mime type and extension handling (Fabian Möller)
+ - Add document links (Fabian Möller)
+ - Add support for multipart document extensions (Fabian
+ Möller)
+ - Add support for apps-script to json export (Fabian Möller)
+ - Fix escaped chars in documents during list (Fabian Möller)
+ - Add --drive-v2-download-min-size a workaround for slow downloads
+ (Fabian Möller)
+ - Improve directory notifications in ChangeNotify (Fabian Möller)
+ - When listing team drives in config, continue on failure (Nick
+ Craig-Wood)
+- FTP
+ - Add a small pause after failed upload before deleting file (Nick
+ Craig-Wood)
+- Google Cloud Storage
+ - Fix service_account_file being ignored (Fabian Möller)
+- Jottacloud
+ - Minor improvement in quota info (omit if unlimited) (albertony)
+ - Add --fast-list support (albertony)
+ - Add permanent delete support: --jottacloud-hard-delete
+ (albertony)
+ - Add link sharing support (albertony)
+ - Fix handling of reserved characters. (Sebastian Bünger)
+ - Fix socket leak on Object.Remove (Nick Craig-Wood)
+- Onedrive
+ - Rework to support Microsoft Graph (Cnly)
+ - NB this will require re-authenticating the remote
+ - Removed upload cutoff and always do session uploads (Oliver
+ Heyme)
+ - Use single-part upload for empty files (Cnly)
+ - Fix new fields not saved when editing old config (Alex Chen)
+ - Fix sometimes special chars in filenames not replaced (Alex
+ Chen)
+ - Ignore OneNote files by default (Alex Chen)
+ - Add link sharing support (jackyzy823)
+- S3
+ - Use custom pacer, to retry operations when reasonable (Craig
+ Miskell)
+ - Use configured server-side-encryption and storage class options
+ when calling CopyObject() (Paul Kohout)
+ - Make --s3-v2-auth flag (Nick Craig-Wood)
+ - Fix v2 auth on files with spaces (Nick Craig-Wood)
+- Union
+ - Implement union backend which reads from multiple backends
+ (Felix Brucker)
+ - Implement optional interfaces (Move, DirMove, Copy etc) (Nick
+ Craig-Wood)
+ - Fix ChangeNotify to support multiple remotes (Fabian Möller)
+ - Fix --backup-dir on union backend (Nick Craig-Wood)
+- WebDAV
+ - Add another time format (Nick Craig-Wood)
+ - Add a small pause after failed upload before deleting file (Nick
+ Craig-Wood)
+ - Add workaround for missing mtime (buergi)
+ - Sharepoint: Renew cookies after 12hrs (Henning Surmeier)
+- Yandex
+ - Remove redundant nil checks (teresy)
+
+
+v1.43.1 - 2018-09-07
+
+Point release to fix hubic and azureblob backends.
+
+- Bug Fixes
+ - ncdu: Return error instead of log.Fatal in Show (Fabian Möller)
+ - cmd: Fix crash with --progress and --stats 0 (Nick Craig-Wood)
+ - docs: Tidy website display (Anagh Kumar Baranwal)
+- Azure Blob:
+ - Fix multi-part uploads. (sandeepkru)
+- Hubic
+ - Fix uploads (Nick Craig-Wood)
+ - Retry auth fetching if it fails to make hubic more reliable
+ (Nick Craig-Wood)
+
+
+v1.43 - 2018-09-01
- New backends
- Jottacloud (Sebastian Bünger)
@@ -13214,6 +16074,7 @@ Contributors
- Onno Zweers onno.zweers@surfsara.nl
- Jasper Lievisse Adriaanse jasper@humppa.nl
- sandeepkru sandeep.ummadi@gmail.com
+ sandeepkru@users.noreply.github.com
- HerrH atomtigerzoo@users.noreply.github.com
- Andrew 4030760+sparkyman215@users.noreply.github.com
- dan smith XX1011@gmail.com
@@ -13228,6 +16089,28 @@ Contributors
- Alex Chen Cnly@users.noreply.github.com
- Denis deniskovpen@gmail.com
- bsteiss 35940619+bsteiss@users.noreply.github.com
+- Cédric Connes cedric.connes@gmail.com
+- Dr. Tobias Quathamer toddy15@users.noreply.github.com
+- dcpu 42736967+dcpu@users.noreply.github.com
+- Sheldon Rupp me@shel.io
+- albertony 12441419+albertony@users.noreply.github.com
+- cron410 cron410@gmail.com
+- Anagh Kumar Baranwal anaghk.dos@gmail.com
+- Felix Brucker felix@felixbrucker.com
+- Santiago Rodríguez scollazo@users.noreply.github.com
+- Craig Miskell craig.miskell@fluxfederation.com
+- Antoine GIRARD sapk@sapk.fr
+- Joanna Marek joanna.marek@u2i.com
+- frenos frenos@users.noreply.github.com
+- ssaqua ssaqua@users.noreply.github.com
+- xnaas me@xnaas.info
+- Frantisek Fuka fuka@fuxoft.cz
+- Paul Kohout pauljkohout@yahoo.com
+- dcpu 43330287+dcpu@users.noreply.github.com
+- jackyzy823 jackyzy823@gmail.com
+- David Haguenauer ml@kurokatta.org
+- teresy hi.teresy@gmail.com
+- buergi patbuergi@gmx.de
diff --git a/bin/make_changelog.py b/bin/make_changelog.py
index 0c18d68ae..03e79a6b0 100755
--- a/bin/make_changelog.py
+++ b/bin/make_changelog.py
@@ -165,7 +165,7 @@ def main():
%s
* Bug Fixes
%s
-%s""" % (version, datetime.date.today(), "\n".join(new_features_lines), "\n".join(bugfix_lines), "\n".join(backend_lines)))
+%s""" % (next_version, datetime.date.today(), "\n".join(new_features_lines), "\n".join(bugfix_lines), "\n".join(backend_lines)))
sys.stdout.write(old_tail)
diff --git a/docs/content/b2.md b/docs/content/b2.md
index 708423f26..a7fecae25 100644
--- a/docs/content/b2.md
+++ b/docs/content/b2.md
@@ -381,7 +381,7 @@ This value should be set no larger than 4.657GiB (== 5GB).
- Config: upload_cutoff
- Env Var: RCLONE_B2_UPLOAD_CUTOFF
- Type: SizeSuffix
-- Default: 190.735M
+- Default: 200M
#### --b2-chunk-size
diff --git a/docs/content/changelog.md b/docs/content/changelog.md
index bc9098e6e..f8e07b5c0 100644
--- a/docs/content/changelog.md
+++ b/docs/content/changelog.md
@@ -1,11 +1,110 @@
---
title: "Documentation"
description: "Rclone Changelog"
-date: "2018-09-01"
+date: "2018-10-15"
---
# Changelog
+## v1.44 - 2018-10-15
+
+* New commands
+ * serve ftp: Add ftp server (Antoine GIRARD)
+ * settier: perform storage tier changes on supported remotes (sandeepkru)
+* New Features
+ * Reworked command line help
+ * Make default help less verbose (Nick Craig-Wood)
+ * Split flags up into global and backend flags (Nick Craig-Wood)
+ * Implement specialised help for flags and backends (Nick Craig-Wood)
+ * Show URL of backend help page when starting config (Nick Craig-Wood)
+ * stats: Long names now split in center (Joanna Marek)
+ * Add --log-format flag for more control over log output (dcpu)
+ * rc: Add support for OPTIONS and basic CORS (frenos)
+ * stats: show FatalErrors and NoRetryErrors in stats (Cédric Connes)
+* Bug Fixes
+ * Fix -P not ending with a new line (Nick Craig-Wood)
+ * config: don't create default config dir when user supplies --config (albertony)
+ * Don't print non-ASCII characters with --progress on windows (Nick Craig-Wood)
+ * Correct logs for excluded items (ssaqua)
+* Mount
+ * Remove EXPERIMENTAL tags (Nick Craig-Wood)
+* VFS
+ * Fix race condition detected by serve ftp tests (Nick Craig-Wood)
+ * Add vfs/poll-interval rc command (Fabian Möller)
+ * Enable rename for nearly all remotes using server side Move or Copy (Nick Craig-Wood)
+ * Reduce directory cache cleared by poll-interval (Fabian Möller)
+ * Remove EXPERIMENTAL tags (Nick Craig-Wood)
+* Local
+ * Skip bad symlinks in dir listing with -L enabled (Cédric Connes)
+ * Preallocate files on Windows to reduce fragmentation (Nick Craig-Wood)
+ * Preallocate files on linux with fallocate(2) (Nick Craig-Wood)
+* Cache
+ * Add cache/fetch rc function (Fabian Möller)
+ * Fix worker scale down (Fabian Möller)
+ * Improve performance by not sending info requests for cached chunks (dcpu)
+ * Fix error return value of cache/fetch rc method (Fabian Möller)
+ * Documentation fix for cache-chunk-total-size (Anagh Kumar Baranwal)
+ * Preserve leading / in wrapped remote path (Fabian Möller)
+ * Add plex_insecure option to skip certificate validation (Fabian Möller)
+ * Remove entries that no longer exist in the source (dcpu)
+* Crypt
+ * Preserve leading / in wrapped remote path (Fabian Möller)
+* Alias
+ * Fix handling of Windows network paths (Nick Craig-Wood)
+* Azure Blob
+ * Add --azureblob-list-chunk parameter (Santiago Rodríguez)
+ * Implemented settier command support on azureblob remote. (sandeepkru)
+ * Work around SDK bug which causes errors for chunk-sized files (Nick Craig-Wood)
+* Box
+ * Implement link sharing. (Sebastian Bünger)
+* Drive
+ * Add --drive-import-formats - google docs can now be imported (Fabian Möller)
+ * Rewrite mime type and extension handling (Fabian Möller)
+ * Add document links (Fabian Möller)
+ * Add support for multipart document extensions (Fabian Möller)
+ * Add support for apps-script to json export (Fabian Möller)
+ * Fix escaped chars in documents during list (Fabian Möller)
+ * Add --drive-v2-download-min-size a workaround for slow downloads (Fabian Möller)
+ * Improve directory notifications in ChangeNotify (Fabian Möller)
+ * When listing team drives in config, continue on failure (Nick Craig-Wood)
+* FTP
+ * Add a small pause after failed upload before deleting file (Nick Craig-Wood)
+* Google Cloud Storage
+ * Fix service_account_file being ignored (Fabian Möller)
+* Jottacloud
+ * Minor improvement in quota info (omit if unlimited) (albertony)
+ * Add --fast-list support (albertony)
+ * Add permanent delete support: --jottacloud-hard-delete (albertony)
+ * Add link sharing support (albertony)
+ * Fix handling of reserved characters. (Sebastian Bünger)
+ * Fix socket leak on Object.Remove (Nick Craig-Wood)
+* Onedrive
+ * Rework to support Microsoft Graph (Cnly)
+ * **NB** this will require re-authenticating the remote
+ * Removed upload cutoff and always do session uploads (Oliver Heyme)
+ * Use single-part upload for empty files (Cnly)
+ * Fix new fields not saved when editing old config (Alex Chen)
+ * Fix sometimes special chars in filenames not replaced (Alex Chen)
+ * Ignore OneNote files by default (Alex Chen)
+ * Add link sharing support (jackyzy823)
+* S3
+ * Use custom pacer, to retry operations when reasonable (Craig Miskell)
+ * Use configured server-side-encryption and storage class options when calling CopyObject() (Paul Kohout)
+ * Make --s3-v2-auth flag (Nick Craig-Wood)
+ * Fix v2 auth on files with spaces (Nick Craig-Wood)
+* Union
+ * Implement union backend which reads from multiple backends (Felix Brucker)
+ * Implement optional interfaces (Move, DirMove, Copy etc) (Nick Craig-Wood)
+ * Fix ChangeNotify to support multiple remotes (Fabian Möller)
+ * Fix --backup-dir on union backend (Nick Craig-Wood)
+* WebDAV
+ * Add another time format (Nick Craig-Wood)
+ * Add a small pause after failed upload before deleting file (Nick Craig-Wood)
+ * Add workaround for missing mtime (buergi)
+ * Sharepoint: Renew cookies after 12hrs (Henning Surmeier)
+* Yandex
+ * Remove redundant nil checks (teresy)
+
## v1.43.1 - 2018-09-07
Point release to fix hubic and azureblob backends.
diff --git a/docs/content/commands/rclone.md b/docs/content/commands/rclone.md
index 134b09d53..8666d0d08 100644
--- a/docs/content/commands/rclone.md
+++ b/docs/content/commands/rclone.md
@@ -1,56 +1,22 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone"
slug: rclone
url: /commands/rclone/
---
## rclone
-Sync files and directories to and from local and remote object stores - v1.43
+Show help for rclone commands, flags and backends.
### Synopsis
-Rclone is a command line program to sync files and directories to and
-from various cloud storage systems and using file transfer services, such as:
+Rclone syncs files to and from cloud storage providers as well as
+mounting them, listing them in lots of different ways.
- * Amazon Drive
- * Amazon S3
- * Backblaze B2
- * Box
- * Dropbox
- * FTP
- * Google Cloud Storage
- * Google Drive
- * HTTP
- * Hubic
- * Jottacloud
- * Mega
- * Microsoft Azure Blob Storage
- * Microsoft OneDrive
- * OpenDrive
- * Openstack Swift / Rackspace cloud files / Memset Memstore
- * pCloud
- * QingStor
- * SFTP
- * Webdav / Owncloud / Nextcloud
- * Yandex Disk
- * The local filesystem
+See the home page (https://rclone.org/) for installation, usage,
+documentation, changelog and configuration walkthroughs.
-Features
-
- * MD5/SHA1 hashes checked at all times for file integrity
- * Timestamps preserved on files
- * Partial syncs supported on a whole file basis
- * Copy mode to just copy new/changed files
- * Sync (one way) mode to make a directory identical
- * Check mode to check for file hash equality
- * Can sync to and from network, eg two different cloud accounts
-
-See the home page for installation, usage, documentation, changelog
-and configuration walkthroughs.
-
- * https://rclone.org/
```
@@ -60,259 +26,277 @@ rclone [flags]
### Options
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- -h, --help help for rclone
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- -V, --version Print the version number
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If objects are greater than this, use the drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ -h, --help help for rclone
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access; if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ -V, --version Print the version number
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
@@ -345,7 +329,7 @@ rclone [flags]
* [rclone lsl](/commands/rclone_lsl/) - List the objects in path with modification time, size and path.
* [rclone md5sum](/commands/rclone_md5sum/) - Produces an md5sum file for all the objects in the path.
* [rclone mkdir](/commands/rclone_mkdir/) - Make the path if it doesn't already exist.
-* [rclone mount](/commands/rclone_mount/) - Mount the remote as a mountpoint. **EXPERIMENTAL**
+* [rclone mount](/commands/rclone_mount/) - Mount the remote as file system on a mountpoint.
* [rclone move](/commands/rclone_move/) - Move files from source to dest.
* [rclone moveto](/commands/rclone_moveto/) - Move file or directory from source to dest.
* [rclone ncdu](/commands/rclone_ncdu/) - Explore a remote with a text based user interface.
@@ -356,6 +340,7 @@ rclone [flags]
* [rclone rmdir](/commands/rclone_rmdir/) - Remove the path if empty.
* [rclone rmdirs](/commands/rclone_rmdirs/) - Remove empty directories under the path.
* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
+* [rclone settier](/commands/rclone_settier/) - Changes storage class/tier of objects in remote.
* [rclone sha1sum](/commands/rclone_sha1sum/) - Produces an sha1sum file for all the objects in the path.
* [rclone size](/commands/rclone_size/) - Prints the total size and number of objects in remote:path.
* [rclone sync](/commands/rclone_sync/) - Make source and dest identical, modifying destination only.
@@ -363,4 +348,4 @@ rclone [flags]
* [rclone tree](/commands/rclone_tree/) - List the contents of the remote in a tree like fashion.
* [rclone version](/commands/rclone_version/) - Show the version number.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_about.md b/docs/content/commands/rclone_about.md
index 16b410504..d64003e6e 100644
--- a/docs/content/commands/rclone_about.md
+++ b/docs/content/commands/rclone_about.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone about"
slug: rclone_about
url: /commands/rclone_about/
@@ -69,261 +69,279 @@ rclone about remote: [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_authorize.md b/docs/content/commands/rclone_authorize.md
index c32605fe5..622a48594 100644
--- a/docs/content/commands/rclone_authorize.md
+++ b/docs/content/commands/rclone_authorize.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone authorize"
slug: rclone_authorize
url: /commands/rclone_authorize/
@@ -28,261 +28,279 @@ rclone authorize [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set, limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_cachestats.md b/docs/content/commands/rclone_cachestats.md
index 9ef304c29..426b55ab7 100644
--- a/docs/content/commands/rclone_cachestats.md
+++ b/docs/content/commands/rclone_cachestats.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone cachestats"
slug: rclone_cachestats
url: /commands/rclone_cachestats/
@@ -27,261 +27,279 @@ rclone cachestats source: [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access, if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_cat.md b/docs/content/commands/rclone_cat.md
index 3f8f57c9b..f7f8588b1 100644
--- a/docs/content/commands/rclone_cat.md
+++ b/docs/content/commands/rclone_cat.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone cat"
slug: rclone_cat
url: /commands/rclone_cat/
@@ -49,261 +49,279 @@ rclone cat remote:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_check.md b/docs/content/commands/rclone_check.md
index 3d463473e..12f525771 100644
--- a/docs/content/commands/rclone_check.md
+++ b/docs/content/commands/rclone_check.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone check"
slug: rclone_check
url: /commands/rclone_check/
@@ -43,261 +43,279 @@ rclone check source:path dest:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_cleanup.md b/docs/content/commands/rclone_cleanup.md
index 3bf7c74f3..d5a0da62c 100644
--- a/docs/content/commands/rclone_cleanup.md
+++ b/docs/content/commands/rclone_cleanup.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone cleanup"
slug: rclone_cleanup
url: /commands/rclone_cleanup/
@@ -28,261 +28,279 @@ rclone cleanup remote:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set, limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access; if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_config.md b/docs/content/commands/rclone_config.md
index c8c5c840a..4a134be20 100644
--- a/docs/content/commands/rclone_config.md
+++ b/docs/content/commands/rclone_config.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone config"
slug: rclone_config
url: /commands/rclone_config/
@@ -28,262 +28,280 @@ rclone config [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server URL.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
* [rclone config create](/commands/rclone_config_create/) - Create a new remote with name, type and options.
* [rclone config delete](/commands/rclone_config_delete/) - Delete an existing remote .
* [rclone config dump](/commands/rclone_config_dump/) - Dump the config file as JSON.
@@ -294,4 +312,4 @@ rclone config [flags]
* [rclone config show](/commands/rclone_config_show/) - Print (decrypted) config file, or the config for a single remote.
* [rclone config update](/commands/rclone_config_update/) - Update options in an existing remote.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_config_create.md b/docs/content/commands/rclone_config_create.md
index eb27c5f50..53aec11ac 100644
--- a/docs/content/commands/rclone_config_create.md
+++ b/docs/content/commands/rclone_config_create.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone config create"
slug: rclone_config_create
url: /commands/rclone_config_create/
@@ -33,261 +33,279 @@ rclone config create [ ]* [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_config_delete.md b/docs/content/commands/rclone_config_delete.md
index b331f41dd..92c6d536d 100644
--- a/docs/content/commands/rclone_config_delete.md
+++ b/docs/content/commands/rclone_config_delete.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone config delete"
slug: rclone_config_delete
url: /commands/rclone_config_delete/
@@ -25,261 +25,279 @@ rclone config delete [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_config_dump.md b/docs/content/commands/rclone_config_dump.md
index 3aeccb0c2..4f0b64685 100644
--- a/docs/content/commands/rclone_config_dump.md
+++ b/docs/content/commands/rclone_config_dump.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone config dump"
slug: rclone_config_dump
url: /commands/rclone_config_dump/
@@ -25,261 +25,279 @@ rclone config dump [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access; if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_config_edit.md b/docs/content/commands/rclone_config_edit.md
index d46d86791..3a6ce7dba 100644
--- a/docs/content/commands/rclone_config_edit.md
+++ b/docs/content/commands/rclone_config_edit.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone config edit"
slug: rclone_config_edit
url: /commands/rclone_config_edit/
@@ -28,261 +28,279 @@ rclone config edit [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transfering (default)
+ --delete-before When synchronizing, delete files on destination before transfering
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping. Use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
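The flags listed above are rclone's global options and can be combined freely on any command. As an illustrative sketch (the remote name `s3:` and the bucket path are hypothetical, not taken from this manual), a sync tuned with a few of them might look like:

```shell
# Hypothetical remote "s3:" and bucket path - substitute names from your own config.
# Sync with 8 parallel transfers, 16 checkers, a 10 MByte/s bandwidth cap,
# recursive listing where the backend supports it, and a live progress display.
rclone sync /local/photos s3:my-bucket/photos \
    --transfers 8 \
    --checkers 16 \
    --bwlimit 10M \
    --fast-list \
    -P
```

Adding `-n` (`--dry-run`) first is a common way to preview what such a sync would change before committing to it.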
diff --git a/docs/content/commands/rclone_config_file.md b/docs/content/commands/rclone_config_file.md
index 0d43372e1..d92696629 100644
--- a/docs/content/commands/rclone_config_file.md
+++ b/docs/content/commands/rclone_config_file.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone config file"
slug: rclone_config_file
url: /commands/rclone_config_file/
@@ -25,261 +25,279 @@ rclone config file [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
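As a hedged aside (not part of the generated reference above): every global option in these listings can also be given a default via an environment variable, formed by stripping the leading `--`, upper-casing the name, replacing `-` with `_`, and prefixing `RCLONE_`. A minimal shell sketch of that mapping:

```shell
# Sketch: supplying rclone global options via environment variables
# rather than command-line flags, using rclone's documented naming
# rule (--transfers -> RCLONE_TRANSFERS, and so on).

# Equivalent to passing "--transfers 8" on every invocation:
export RCLONE_TRANSFERS=8

# Equivalent to "--checkers 16":
export RCLONE_CHECKERS=16

# Any later rclone command in this shell, e.g. "rclone copy src: dst:",
# would pick these defaults up automatically.
echo "transfers=$RCLONE_TRANSFERS checkers=$RCLONE_CHECKERS"
```

Command-line flags still override the environment, so the variables act only as defaults.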
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_config_password.md b/docs/content/commands/rclone_config_password.md
index 552a59e69..b1b35b72d 100644
--- a/docs/content/commands/rclone_config_password.md
+++ b/docs/content/commands/rclone_config_password.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone config password"
slug: rclone_config_password
url: /commands/rclone_config_password/
@@ -32,261 +32,279 @@ rclone config password [ ]+ [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_config_providers.md b/docs/content/commands/rclone_config_providers.md
index a18fc4b52..cd138e300 100644
--- a/docs/content/commands/rclone_config_providers.md
+++ b/docs/content/commands/rclone_config_providers.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone config providers"
slug: rclone_config_providers
url: /commands/rclone_config_providers/
@@ -25,261 +25,279 @@ rclone config providers [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access; if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID, you must provide the ARN of the Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_config_show.md b/docs/content/commands/rclone_config_show.md
index 62eee38fb..8419d0965 100644
--- a/docs/content/commands/rclone_config_show.md
+++ b/docs/content/commands/rclone_config_show.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone config show"
slug: rclone_config_show
url: /commands/rclone_config_show/
@@ -25,261 +25,279 @@ rclone config show [] [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for Google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_config_update.md b/docs/content/commands/rclone_config_update.md
index 8c8200917..742c5222f 100644
--- a/docs/content/commands/rclone_config_update.md
+++ b/docs/content/commands/rclone_config_update.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone config update"
slug: rclone_config_update
url: /commands/rclone_config_update/
@@ -32,261 +32,279 @@ rclone config update [ ]+ [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access, if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_copy.md b/docs/content/commands/rclone_copy.md
index fe86f71b8..52e6369d4 100644
--- a/docs/content/commands/rclone_copy.md
+++ b/docs/content/commands/rclone_copy.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone copy"
slug: rclone_copy
url: /commands/rclone_copy/
@@ -61,261 +61,279 @@ rclone copy source:path dest:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_copyto.md b/docs/content/commands/rclone_copyto.md
index b085d3dd3..45b36666d 100644
--- a/docs/content/commands/rclone_copyto.md
+++ b/docs/content/commands/rclone_copyto.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone copyto"
slug: rclone_copyto
url: /commands/rclone_copyto/
@@ -51,261 +51,279 @@ rclone copyto source:path dest:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set, limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access; if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_copyurl.md b/docs/content/commands/rclone_copyurl.md
index 39d8d01b3..5d5f5ee05 100644
--- a/docs/content/commands/rclone_copyurl.md
+++ b/docs/content/commands/rclone_copyurl.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone copyurl"
slug: rclone_copyurl
url: /commands/rclone_copyurl/
@@ -28,261 +28,279 @@ rclone copyurl https://example.com dest:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If objects are greater than this size, use the drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_cryptcheck.md b/docs/content/commands/rclone_cryptcheck.md
index ea1331f24..a747731b0 100644
--- a/docs/content/commands/rclone_cryptcheck.md
+++ b/docs/content/commands/rclone_cryptcheck.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone cryptcheck"
slug: rclone_cryptcheck
url: /commands/rclone_cryptcheck/
@@ -53,261 +53,279 @@ rclone cryptcheck remote:path cryptedremote:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_cryptdecode.md b/docs/content/commands/rclone_cryptdecode.md
index 3e5a0aec1..2463738e4 100644
--- a/docs/content/commands/rclone_cryptdecode.md
+++ b/docs/content/commands/rclone_cryptdecode.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone cryptdecode"
slug: rclone_cryptdecode
url: /commands/rclone_cryptdecode/
@@ -37,261 +37,279 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transfering (default)
+ --delete-before When synchronizing, delete files on destination before transfering
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access; if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_dbhashsum.md b/docs/content/commands/rclone_dbhashsum.md
index bb1fe2173..c89ce78cd 100644
--- a/docs/content/commands/rclone_dbhashsum.md
+++ b/docs/content/commands/rclone_dbhashsum.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone dbhashsum"
slug: rclone_dbhashsum
url: /commands/rclone_dbhashsum/
@@ -30,261 +30,279 @@ rclone dbhashsum remote:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access; if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_dedupe.md b/docs/content/commands/rclone_dedupe.md
index f00040b71..acad65614 100644
--- a/docs/content/commands/rclone_dedupe.md
+++ b/docs/content/commands/rclone_dedupe.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone dedupe"
slug: rclone_dedupe
url: /commands/rclone_dedupe/
@@ -106,261 +106,279 @@ rclone dedupe [mode] remote:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If objects are larger than this, use the drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access, if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_delete.md b/docs/content/commands/rclone_delete.md
index fe41c9709..96544fedd 100644
--- a/docs/content/commands/rclone_delete.md
+++ b/docs/content/commands/rclone_delete.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone delete"
slug: rclone_delete
url: /commands/rclone_delete/
@@ -42,261 +42,279 @@ rclone delete remote:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access, if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_deletefile.md b/docs/content/commands/rclone_deletefile.md
index 506527e97..13403a4ae 100644
--- a/docs/content/commands/rclone_deletefile.md
+++ b/docs/content/commands/rclone_deletefile.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone deletefile"
slug: rclone_deletefile
url: /commands/rclone_deletefile/
@@ -29,261 +29,279 @@ rclone deletefile remote:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transfering (default)
+ --delete-before When synchronizing, delete files on destination before transfering
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export                       Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix                  Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date                       Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP bodies - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_genautocomplete.md b/docs/content/commands/rclone_genautocomplete.md
index d1eeace21..900cb36d2 100644
--- a/docs/content/commands/rclone_genautocomplete.md
+++ b/docs/content/commands/rclone_genautocomplete.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone genautocomplete"
slug: rclone_genautocomplete
url: /commands/rclone_genautocomplete/
@@ -24,263 +24,281 @@ Run with --help to list the supported shells.
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transfering (default)
+ --delete-before When synchronizing, delete files on destination before transfering
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access, if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
* [rclone genautocomplete bash](/commands/rclone_genautocomplete_bash/) - Output bash completion script for rclone.
* [rclone genautocomplete zsh](/commands/rclone_genautocomplete_zsh/) - Output zsh completion script for rclone.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_genautocomplete_bash.md b/docs/content/commands/rclone_genautocomplete_bash.md
index 421c5fa54..48f79e3c8 100644
--- a/docs/content/commands/rclone_genautocomplete_bash.md
+++ b/docs/content/commands/rclone_genautocomplete_bash.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone genautocomplete bash"
slug: rclone_genautocomplete_bash
url: /commands/rclone_genautocomplete_bash/
@@ -40,261 +40,279 @@ rclone genautocomplete bash [output_file] [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_genautocomplete_zsh.md b/docs/content/commands/rclone_genautocomplete_zsh.md
index ff3fc78fd..a04a77ffd 100644
--- a/docs/content/commands/rclone_genautocomplete_zsh.md
+++ b/docs/content/commands/rclone_genautocomplete_zsh.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone genautocomplete zsh"
slug: rclone_genautocomplete_zsh
url: /commands/rclone_genautocomplete_zsh/
@@ -40,261 +40,279 @@ rclone genautocomplete zsh [output_file] [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_gendocs.md b/docs/content/commands/rclone_gendocs.md
index 068dfb0cf..cea1d1600 100644
--- a/docs/content/commands/rclone_gendocs.md
+++ b/docs/content/commands/rclone_gendocs.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone gendocs"
slug: rclone_gendocs
url: /commands/rclone_gendocs/
@@ -28,261 +28,279 @@ rclone gendocs output_directory [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+       --acd-token-url string                         Token server URL.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+       --delete-after                                 When synchronizing, delete files on destination after transferring (default)
+       --delete-before                                When synchronizing, delete files on destination before transferring
+       --delete-during                                When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+       --drive-alternate-export                       Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+       --drive-chunk-size SizeSuffix                  Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+       --drive-use-created-date                       Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+       --drive-v2-download-min-size SizeSuffix        If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+       --dump-headers                                 Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access, if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_hashsum.md b/docs/content/commands/rclone_hashsum.md
index 3ae7b8974..88b3191e9 100644
--- a/docs/content/commands/rclone_hashsum.md
+++ b/docs/content/commands/rclone_hashsum.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone hashsum"
slug: rclone_hashsum
url: /commands/rclone_hashsum/
@@ -42,261 +42,279 @@ rclone hashsum remote:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+     --delete-after                          When synchronizing, delete files on destination after transferring (default)
+     --delete-before                         When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+     --drive-alternate-export                Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+     --drive-chunk-size SizeSuffix           Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+     --drive-use-created-date                Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+     --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+     --dump-headers                          Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+     --ftp-user string                       FTP username, leave blank for current username
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+     --ignore-errors                         Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+     --ignore-size                           Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+     --max-depth int                         If set, limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+     --qingstor-connection-retries int       Number of connection retries. (default 3)
+     --qingstor-endpoint string              Enter an endpoint URL to connect to the QingStor API.
+     --qingstor-env-auth                     Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+     --retries-sleep duration                Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+     --s3-force-path-style                   If true use path style access; if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+     --sftp-user string                      SSH username, leave blank for current username
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+     --stats duration                        Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_link.md b/docs/content/commands/rclone_link.md
index b4059f8a6..7b6dc6bc3 100644
--- a/docs/content/commands/rclone_link.md
+++ b/docs/content/commands/rclone_link.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone link"
slug: rclone_link
url: /commands/rclone_link/
@@ -35,261 +35,279 @@ rclone link remote:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_listremotes.md b/docs/content/commands/rclone_listremotes.md
index 780c5255f..7511cc0b0 100644
--- a/docs/content/commands/rclone_listremotes.md
+++ b/docs/content/commands/rclone_listremotes.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone listremotes"
slug: rclone_listremotes
url: /commands/rclone_listremotes/
@@ -30,261 +30,279 @@ rclone listremotes [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for Google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_ls.md b/docs/content/commands/rclone_ls.md
index c3a98ba99..eac46198f 100644
--- a/docs/content/commands/rclone_ls.md
+++ b/docs/content/commands/rclone_ls.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone ls"
slug: rclone_ls
url: /commands/rclone_ls/
@@ -59,261 +59,279 @@ rclone ls remote:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_lsd.md b/docs/content/commands/rclone_lsd.md
index 8ff0d4a28..1795f08df 100644
--- a/docs/content/commands/rclone_lsd.md
+++ b/docs/content/commands/rclone_lsd.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone lsd"
slug: rclone_lsd
url: /commands/rclone_lsd/
@@ -70,261 +70,279 @@ rclone lsd remote:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set, limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access; if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID, you must provide the ARN of the Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_lsf.md b/docs/content/commands/rclone_lsf.md
index 46ee882c4..e69cdeac6 100644
--- a/docs/content/commands/rclone_lsf.md
+++ b/docs/content/commands/rclone_lsf.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone lsf"
slug: rclone_lsf
url: /commands/rclone_lsf/
@@ -148,261 +148,279 @@ rclone lsf remote:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transfering (default)
+ --delete-before When synchronizing, delete files on destination before transfering
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping, use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_lsjson.md b/docs/content/commands/rclone_lsjson.md
index 11c84c313..88e875a87 100644
--- a/docs/content/commands/rclone_lsjson.md
+++ b/docs/content/commands/rclone_lsjson.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone lsjson"
slug: rclone_lsjson
url: /commands/rclone_lsjson/
@@ -88,261 +88,279 @@ rclone lsjson remote:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access; if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_lsl.md b/docs/content/commands/rclone_lsl.md
index 7c82288ed..27855f4bd 100644
--- a/docs/content/commands/rclone_lsl.md
+++ b/docs/content/commands/rclone_lsl.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone lsl"
slug: rclone_lsl
url: /commands/rclone_lsl/
@@ -59,261 +59,279 @@ rclone lsl remote:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+      --qingstor-connection-retries int        Number of connection retries. (default 3)
+      --qingstor-endpoint string               Enter an endpoint URL to connect to the QingStor API.
+      --qingstor-env-auth                      Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_md5sum.md b/docs/content/commands/rclone_md5sum.md
index 34e34a110..ec03068ad 100644
--- a/docs/content/commands/rclone_md5sum.md
+++ b/docs/content/commands/rclone_md5sum.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone md5sum"
slug: rclone_md5sum
url: /commands/rclone_md5sum/
@@ -28,261 +28,279 @@ rclone md5sum remote:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transfering (default)
+ --delete-before When synchronizing, delete files on destination before transfering
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for Google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_mkdir.md b/docs/content/commands/rclone_mkdir.md
index db440d376..afd750169 100644
--- a/docs/content/commands/rclone_mkdir.md
+++ b/docs/content/commands/rclone_mkdir.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone mkdir"
slug: rclone_mkdir
url: /commands/rclone_mkdir/
@@ -25,261 +25,279 @@ rclone mkdir remote:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transfering (default)
+ --delete-before When synchronizing, delete files on destination before transfering
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+      --drive-alternate-export                 Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+      --drive-chunk-size SizeSuffix            Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+      --drive-use-created-date                 Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+      --drive-v2-download-min-size SizeSuffix  If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+      --dump-headers                           Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+      --ignore-errors                          Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+      --ignore-size                            Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+      --max-depth int                          If set, limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+      --qingstor-connection-retries int        Number of connection retries. (default 3)
+      --qingstor-endpoint string               Enter an endpoint URL to connect to the QingStor API.
+      --qingstor-env-auth                      Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_mount.md b/docs/content/commands/rclone_mount.md
index 350111dab..acc7de6dd 100644
--- a/docs/content/commands/rclone_mount.md
+++ b/docs/content/commands/rclone_mount.md
@@ -1,12 +1,12 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone mount"
slug: rclone_mount
url: /commands/rclone_mount/
---
## rclone mount
-Mount the remote as a mountpoint. **EXPERIMENTAL**
+Mount the remote as a file system on a mountpoint.
### Synopsis
@@ -15,8 +15,6 @@ rclone mount allows Linux, FreeBSD, macOS and Windows to
mount any of Rclone's cloud storage systems as a file system with
FUSE.
-This is **EXPERIMENTAL** - use with care.
-
First set up your remote using `rclone config`. Check it works with `rclone ls` etc.
Start the mount like this
@@ -91,8 +89,8 @@ File systems expect things to be 100% reliable, whereas cloud storage
systems are a long way from 100% reliable. The rclone sync/copy
commands cope with this with lots of retries. However rclone mount
can't use retries in the same way without making local copies of the
-uploads. Look at the **EXPERIMENTAL** [file caching](#file-caching)
-for solutions to make mount more reliable.
+uploads. Look at the [file caching](#file-caching)
+for solutions to make mount more reliable.
### Attribute caching
@@ -201,8 +199,6 @@ The maximum memory used by rclone for buffering can be up to
### File Caching
-**NB** File caching is **EXPERIMENTAL** - use with care!
-
These flags control the VFS file caching options. The VFS layer is
used by rclone mount to make a cloud storage system work more like a
normal file system.
@@ -329,261 +325,279 @@ rclone mount remote:path /path/to/mountpoint [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+      --delete-after                                 When synchronizing, delete files on destination after transferring (default)
+      --delete-before                                When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+      --drive-alternate-export                       Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+      --drive-chunk-size SizeSuffix                  Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+      --drive-use-created-date                       Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+      --drive-v2-download-min-size SizeSuffix        If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+      --dump-headers                                 Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+      --ignore-errors                                Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+      --ignore-size                                  Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+      --qingstor-connection-retries int              Number of connection retries. (default 3)
+      --qingstor-endpoint string                     Enter an endpoint URL to connect to the QingStor API.
+      --qingstor-env-auth                            Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_move.md b/docs/content/commands/rclone_move.md
index 47254a606..d4235eec0 100644
--- a/docs/content/commands/rclone_move.md
+++ b/docs/content/commands/rclone_move.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone move"
slug: rclone_move
url: /commands/rclone_move/
@@ -45,261 +45,279 @@ rclone move source:path dest:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+      --delete-after                          When synchronizing, delete files on destination after transferring (default)
+      --delete-before                         When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+      --drive-alternate-export                Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+      --drive-chunk-size SizeSuffix           Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+      --drive-use-created-date                Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+      --dump-headers                          Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+      --ignore-errors                         Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+      --ignore-size                           Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_moveto.md b/docs/content/commands/rclone_moveto.md
index 6b836e836..7fa1ced03 100644
--- a/docs/content/commands/rclone_moveto.md
+++ b/docs/content/commands/rclone_moveto.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone moveto"
slug: rclone_moveto
url: /commands/rclone_moveto/
@@ -54,261 +54,279 @@ rclone moveto source:path dest:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+      --delete-after                                 When synchronizing, delete files on destination after transferring (default)
+      --delete-before                                When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+      --drive-alternate-export                       Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+      --drive-chunk-size SizeSuffix                  Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+      --drive-use-created-date                       Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+      --drive-v2-download-min-size SizeSuffix        If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+      --dump-headers                                 Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+      --ignore-errors                                Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+      --ignore-size                                  Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+      --qingstor-connection-retries int              Number of connection retries. (default 3)
+      --qingstor-endpoint string                     Enter an endpoint URL to connect to the QingStor API.
+      --qingstor-env-auth                            Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+      --retries-sleep duration                       Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+      --s3-force-path-style                          If true use path style access; if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+      --s3-sse-kms-key-id string                     If using KMS ID you must provide the ARN of the Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+      --stats duration                               Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_ncdu.md b/docs/content/commands/rclone_ncdu.md
index ec3a1b424..09c6755e1 100644
--- a/docs/content/commands/rclone_ncdu.md
+++ b/docs/content/commands/rclone_ncdu.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone ncdu"
slug: rclone_ncdu
url: /commands/rclone_ncdu/
@@ -52,261 +52,279 @@ rclone ncdu remote:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transfering (default)
+ --delete-before When synchronizing, delete files on destination before transfering
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_obscure.md b/docs/content/commands/rclone_obscure.md
index 11c179d2b..d9a186985 100644
--- a/docs/content/commands/rclone_obscure.md
+++ b/docs/content/commands/rclone_obscure.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone obscure"
slug: rclone_obscure
url: /commands/rclone_obscure/
@@ -25,261 +25,279 @@ rclone obscure password [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+      --buffer-size int                            In-memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+      --delete-after                               When synchronizing, delete files on destination after transferring (default)
+      --delete-before                              When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+      --drive-alternate-export                     Use alternate export URLs for Google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+      --drive-chunk-size SizeSuffix                Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+      --drive-use-created-date                     Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+      --drive-v2-download-min-size SizeSuffix      If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+      --dump-headers                               Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+      --ignore-errors                              Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+      --ignore-size                                Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+      --max-depth int                              If set, limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+      --qingstor-connection-retries int            Number of connection retries. (default 3)
+      --qingstor-endpoint string                   Enter an endpoint URL to connect to the QingStor API.
+      --qingstor-env-auth                          Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+      --retries-sleep duration                     Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+      --s3-force-path-style                        If true use path style access, if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+      --stats duration                             Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_purge.md b/docs/content/commands/rclone_purge.md
index 4add9cf4a..f91ae8eaa 100644
--- a/docs/content/commands/rclone_purge.md
+++ b/docs/content/commands/rclone_purge.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone purge"
slug: rclone_purge
url: /commands/rclone_purge/
@@ -29,261 +29,279 @@ rclone purge remote:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before starting
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transferring (default)
- --delete-before When synchronizing, delete files on destination before transferring
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors Delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping; use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connection retries. (default 3)
- --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+      --qingstor-connection-retries int          Number of connection retries. (default 3)
+      --qingstor-endpoint string                 Enter an endpoint URL to connect to the QingStor API.
+      --qingstor-env-auth                        Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+      --retries-sleep duration                   Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+      --stats duration                           Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_rc.md b/docs/content/commands/rclone_rc.md
index 75e7a0f12..97482ee33 100644
--- a/docs/content/commands/rclone_rc.md
+++ b/docs/content/commands/rclone_rc.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone rc"
slug: rclone_rc
url: /commands/rclone_rc/
@@ -35,261 +35,279 @@ rclone rc commands parameter [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
-      --delete-after                          When synchronizing, delete files on destination after transferring (default)
-      --delete-before                         When synchronizing, delete files on destination before transferring
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
-      --drive-chunk-size SizeSuffix           Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
-      --dump-headers                          Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
-      --ignore-errors                         Delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
-      --ignore-size                           Ignore size when skipping; use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
-      --qingstor-connection-retries int       Number of connection retries. (default 3)
-      --qingstor-endpoint string              Enter an endpoint URL to connect to the QingStor API.
-      --qingstor-env-auth                     Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
-      --retries-sleep duration                Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
-      --stats duration                        Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transfering (default)
+ --delete-before When synchronizing, delete files on destination before transfering
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_rcat.md b/docs/content/commands/rclone_rcat.md
index 6ea7090d5..dc5e095c3 100644
--- a/docs/content/commands/rclone_rcat.md
+++ b/docs/content/commands/rclone_rcat.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone rcat"
slug: rclone_rcat
url: /commands/rclone_rcat/
@@ -47,261 +47,279 @@ rclone rcat remote:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+       --delete-after                              When synchronizing, delete files on destination after transferring (default)
+       --delete-before                             When synchronizing, delete files on destination before transferring
+       --delete-during                             When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+       --drive-alternate-export                    Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+       --drive-chunk-size SizeSuffix               Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+       --drive-use-created-date                    Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+       --drive-v2-download-min-size SizeSuffix     If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+       --dump-headers                              Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+       --ignore-errors                             Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+       --ignore-size                               Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+       --qingstor-connection-retries int           Number of connection retries. (default 3)
+       --qingstor-endpoint string                  Enter an endpoint URL to connect to the QingStor API.
+       --qingstor-env-auth                         Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+       --retries-sleep duration                    Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+       --stats duration                            Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_rmdir.md b/docs/content/commands/rclone_rmdir.md
index 5bdf4f88d..e87b68056 100644
--- a/docs/content/commands/rclone_rmdir.md
+++ b/docs/content/commands/rclone_rmdir.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone rmdir"
slug: rclone_rmdir
url: /commands/rclone_rmdir/
@@ -27,261 +27,279 @@ rclone rmdir remote:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_rmdirs.md b/docs/content/commands/rclone_rmdirs.md
index fb7d8f78d..1912373f3 100644
--- a/docs/content/commands/rclone_rmdirs.md
+++ b/docs/content/commands/rclone_rmdirs.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone rmdirs"
slug: rclone_rmdirs
url: /commands/rclone_rmdirs/
@@ -35,261 +35,279 @@ rclone rmdirs remote:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_serve.md b/docs/content/commands/rclone_serve.md
index 3b83fe03e..1c4850cf7 100644
--- a/docs/content/commands/rclone_serve.md
+++ b/docs/content/commands/rclone_serve.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone serve"
slug: rclone_serve
url: /commands/rclone_serve/
@@ -31,264 +31,283 @@ rclone serve [opts] [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access; if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
+* [rclone serve ftp](/commands/rclone_serve_ftp/) - Serve remote:path over FTP.
* [rclone serve http](/commands/rclone_serve_http/) - Serve the remote over HTTP.
* [rclone serve restic](/commands/rclone_serve_restic/) - Serve the remote for restic's REST API.
* [rclone serve webdav](/commands/rclone_serve_webdav/) - Serve remote:path over webdav.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_serve_ftp.md b/docs/content/commands/rclone_serve_ftp.md
new file mode 100644
index 000000000..c72401e65
--- /dev/null
+++ b/docs/content/commands/rclone_serve_ftp.md
@@ -0,0 +1,469 @@
+---
+date: 2018-10-15T11:00:47+01:00
+title: "rclone serve ftp"
+slug: rclone_serve_ftp
+url: /commands/rclone_serve_ftp/
+---
+## rclone serve ftp
+
+Serve remote:path over FTP.
+
+### Synopsis
+
+
+rclone serve ftp implements a basic FTP server to serve the
+remote over the FTP protocol. It can be accessed with an FTP client,
+or you can make a remote of type ftp to read and write it.
+
+### Server options
+
+Use --addr to specify which IP address and port the server should
+listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all
+IPs. By default it only listens on localhost. You can use port
+:0 to let the OS choose an available port.
+
+If you set --addr to listen on a public or LAN accessible IP address
+then using Authentication is advised - see the next section for info.
+
+#### Authentication
+
+By default this will serve files without needing a login.
+
+You can set a single username and password with the --user and --pass flags.
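+
+As a sketch (the remote name `myremote` and the credentials here are
+hypothetical), serving on all interfaces with a login could look like:
+
+    rclone serve ftp myremote:path --addr :2121 --user alice --pass secret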
+
+### Directory Cache
+
+Using the `--dir-cache-time` flag, you can set how long a
+directory should be considered up to date and not refreshed from the
+backend. Changes made locally in the mount may appear immediately or
+invalidate the cache. However, changes done on the remote will only
+be picked up once the cache expires.
+
+Alternatively, you can send a `SIGHUP` signal to rclone for
+it to flush all directory caches, regardless of how old they are.
+Assuming only one rclone instance is running, you can reset the cache
+like this:
+
+ kill -SIGHUP $(pidof rclone)
+
+If you configure rclone with a [remote control](/rc) then you can use
+rclone rc to flush the whole directory cache:
+
+ rclone rc vfs/forget
+
+Or individual files or directories:
+
+ rclone rc vfs/forget file=path/to/file dir=path/to/dir
+
+### File Buffering
+
+The `--buffer-size` flag determines the amount of memory
+that will be used to buffer data in advance.
+
+Each open file descriptor will try to keep the specified amount of
+data in memory at all times. The buffered data is bound to one file
+descriptor and won't be shared between multiple open file descriptors
+of the same file.
+
+This flag is an upper limit for the memory used per file descriptor.
+The buffer will only use memory for data that is downloaded but
+not yet read. If the buffer is empty, only a small amount of memory
+will be used.
+The maximum memory used by rclone for buffering can be up to
+`--buffer-size * open files`.
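+
+For example, with `--buffer-size 32M` (an illustrative value, using a
+hypothetical remote named `myremote`) and 10 file descriptors open at
+once, buffering alone could use up to about 320M of memory:
+
+    rclone serve ftp myremote:path --buffer-size 32M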
+
+### File Caching
+
+These flags control the VFS file caching options. The VFS layer is
+used by rclone mount to make a cloud storage system work more like a
+normal file system.
+
+You'll need to enable VFS caching if you want, for example, to read
+and write simultaneously to a file. See below for more details.
+
+Note that the VFS cache works in addition to the cache backend and you
+may find that you need one or the other or both.
+
+ --cache-dir string Directory rclone will use for caching.
+ --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+ --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+
+If run with `-vv` rclone will print the location of the file cache. The
+files are stored in the user cache file area which is OS dependent but
+can be controlled with `--cache-dir` or setting the appropriate
+environment variable.
+
+The cache has 4 different modes selected by `--vfs-cache-mode`.
+The higher the cache mode, the more compatible rclone becomes, at the
+cost of using disk space.
+
+Note that files are written back to the remote only when they are
+closed, so if rclone is quit or dies with open files then these won't
+get written back to the remote. However they will still be in the
+on-disk cache.
+
+#### --vfs-cache-mode off
+
+In this mode the cache will read directly from the remote and write
+directly to the remote without caching anything on disk.
+
+This will mean some operations are not possible:
+
+ * Files can't be opened for both read AND write
+ * Files opened for write can't be seeked
+ * Existing files opened for write must have O_TRUNC set
+ * Files open for read with O_TRUNC will be opened write only
+ * Files open for write only will behave as if O_TRUNC was supplied
+ * Open modes O_APPEND, O_TRUNC are ignored
+ * If an upload fails it can't be retried
+
+#### --vfs-cache-mode minimal
+
+This is very similar to "off" except that files opened for read AND
+write will be buffered to disk. This means that files opened for
+write will be a lot more compatible, but this mode uses minimal disk space.
+
+These operations are not possible:
+
+ * Files opened for write only can't be seeked
+ * Existing files opened for write must have O_TRUNC set
+ * Files opened for write only will ignore O_APPEND, O_TRUNC
+ * If an upload fails it can't be retried
+
+#### --vfs-cache-mode writes
+
+In this mode files opened for read only are still read directly from
+the remote, write only and read/write files are buffered to disk
+first.
+
+This mode should support all normal file system operations.
+
+If an upload fails it will be retried up to --low-level-retries times.
+
+#### --vfs-cache-mode full
+
+In this mode all reads and writes are buffered to and from disk. When
+a file is opened for read it will be downloaded in its entirety first.
+
+This may be appropriate for your needs, or you may prefer to look at
+the cache backend which does a much more sophisticated job of caching,
+including caching directory hierarchies and chunks of files.
+
+In this mode, unlike the others, when a file is written to the disk,
+it will be kept on the disk after it is written to the remote. It
+will be purged on a schedule according to `--vfs-cache-max-age`.
+
+This mode should support all normal file system operations.
+
+If an upload or download fails it will be retried up to
+--low-level-retries times.
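+
+Putting these caching flags together, a sketch (values illustrative,
+remote name `myremote` hypothetical) might be:
+
+    rclone serve ftp myremote:path --vfs-cache-mode writes --vfs-cache-max-age 24h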
+
+
+```
+rclone serve ftp remote:path [flags]
+```
+
+### Options
+
+```
+ --addr string IPaddress:Port or :Port to bind server to. (default "localhost:2121")
+ --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+ --gid uint32 Override the gid field set by the filesystem. (default 502)
+ -h, --help help for ftp
+ --no-checksum Don't compare checksums on up/download.
+ --no-modtime Don't read/write the modification time (can speed things up).
+ --no-seek Don't allow seeking in files.
+ --pass string Password for authentication. (an empty value allows any password)
+ --passive-port string Passive port range to use. (default "30000-32000")
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
+ --read-only Mount read-only.
+ --uid uint32 Override the uid field set by the filesystem. (default 502)
+ --umask int Override the permission bits set by the filesystem. (default 2)
+ --user string User name for authentication. (default "anonymous")
+ --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+ --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+ --vfs-read-chunk-size int Read the source objects in chunks. (default 128M)
+ --vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
+```
+
+### Options inherited from parent commands
+
+```
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access, if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
+```
+
+### SEE ALSO
+
+* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
+
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_serve_http.md b/docs/content/commands/rclone_serve_http.md
index 901d278ab..7aab35187 100644
--- a/docs/content/commands/rclone_serve_http.md
+++ b/docs/content/commands/rclone_serve_http.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone serve http"
slug: rclone_serve_http
url: /commands/rclone_serve_http/
@@ -115,8 +115,6 @@ The maximum memory used by rclone for buffering can be up to
### File Caching
-**NB** File caching is **EXPERIMENTAL** - use with care!
-
These flags control the VFS file caching options. The VFS layer is
used by rclone mount to make a cloud storage system work more like a
normal file system.
@@ -241,261 +239,279 @@ rclone serve http remote:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If objects are greater than this, use the drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access; if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_serve_restic.md b/docs/content/commands/rclone_serve_restic.md
index a9e4b3a19..89a558737 100644
--- a/docs/content/commands/rclone_serve_restic.md
+++ b/docs/content/commands/rclone_serve_restic.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone serve restic"
slug: rclone_serve_restic
url: /commands/rclone_serve_restic/
@@ -161,261 +161,279 @@ rclone serve restic remote:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transfering (default)
+ --delete-before When synchronizing, delete files on destination before transfering
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_serve_webdav.md b/docs/content/commands/rclone_serve_webdav.md
index 771aea630..c51267cbc 100644
--- a/docs/content/commands/rclone_serve_webdav.md
+++ b/docs/content/commands/rclone_serve_webdav.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone serve webdav"
slug: rclone_serve_webdav
url: /commands/rclone_serve_webdav/
@@ -123,8 +123,6 @@ The maximum memory used by rclone for buffering can be up to
### File Caching
-**NB** File caching is **EXPERIMENTAL** - use with care!
-
These flags control the VFS file caching options. The VFS layer is
used by rclone mount to make a cloud storage system work more like a
normal file system.
@@ -250,261 +248,279 @@ rclone serve webdav remote:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Objects are greater, use the drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access; if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_settier.md b/docs/content/commands/rclone_settier.md
new file mode 100644
index 000000000..c2eca534d
--- /dev/null
+++ b/docs/content/commands/rclone_settier.md
@@ -0,0 +1,325 @@
+---
+date: 2018-10-15T11:00:47+01:00
+title: "rclone settier"
+slug: rclone_settier
+url: /commands/rclone_settier/
+---
+## rclone settier
+
+Changes storage class/tier of objects in remote.
+
+### Synopsis
+
+
+rclone settier changes the storage tier or class of objects at the remote, if supported.
+A few cloud storage services provide different storage classes for objects,
+for example AWS S3 and Glacier; Azure Blob storage - Hot, Cool and Archive;
+Google Cloud Storage - Regional Storage, Nearline, Coldline etc.
+
+Note that certain tier changes make objects unavailable for immediate access.
+For example, tiering to archive in Azure Blob storage puts objects into a frozen
+state, from which the user can restore them by setting the tier to Hot/Cool;
+similarly, moving S3 objects to Glacier makes them inaccessible until restored.
+
+You can use it to tier a single object
+
+ rclone settier Cool remote:path/file
+
+Or use rclone filters to set the tier on specific files only
+
+ rclone --include "*.txt" settier Hot remote:path/dir
+
+Or just provide a remote directory and all files in the directory will be tiered
+
+ rclone settier tier remote:path/dir
+
+
+```
+rclone settier tier remote:path [flags]
+```
+
+### Options
+
+```
+ -h, --help help for settier
+```
+
+### Options inherited from parent commands
+
+```
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transfering (default)
+ --delete-before When synchronizing, delete files on destination before transfering
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
+```
+
+### SEE ALSO
+
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
+
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_sha1sum.md b/docs/content/commands/rclone_sha1sum.md
index bacb5783c..659744dfc 100644
--- a/docs/content/commands/rclone_sha1sum.md
+++ b/docs/content/commands/rclone_sha1sum.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone sha1sum"
slug: rclone_sha1sum
url: /commands/rclone_sha1sum/
@@ -28,261 +28,279 @@ rclone sha1sum remote:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_size.md b/docs/content/commands/rclone_size.md
index 03aebd84d..833a19404 100644
--- a/docs/content/commands/rclone_size.md
+++ b/docs/content/commands/rclone_size.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone size"
slug: rclone_size
url: /commands/rclone_size/
@@ -26,261 +26,279 @@ rclone size remote:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IP address:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access, if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_sync.md b/docs/content/commands/rclone_sync.md
index d1c515d5e..a4a8e4206 100644
--- a/docs/content/commands/rclone_sync.md
+++ b/docs/content/commands/rclone_sync.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone sync"
slug: rclone_sync
url: /commands/rclone_sync/
@@ -44,261 +44,279 @@ rclone sync source:path dest:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access; if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_touch.md b/docs/content/commands/rclone_touch.md
index fc6cb3c94..21ee0a7ab 100644
--- a/docs/content/commands/rclone_touch.md
+++ b/docs/content/commands/rclone_touch.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone touch"
slug: rclone_touch
url: /commands/rclone_touch/
@@ -27,261 +27,279 @@ rclone touch remote:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access; if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_tree.md b/docs/content/commands/rclone_tree.md
index 52c9c4980..635f105bd 100644
--- a/docs/content/commands/rclone_tree.md
+++ b/docs/content/commands/rclone_tree.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone tree"
slug: rclone_tree
url: /commands/rclone_tree/
@@ -68,261 +68,279 @@ rclone tree remote:path [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date. (default false)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP bodies - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/commands/rclone_version.md b/docs/content/commands/rclone_version.md
index 75a02b72e..ea4653629 100644
--- a/docs/content/commands/rclone_version.md
+++ b/docs/content/commands/rclone_version.md
@@ -1,5 +1,5 @@
---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
title: "rclone version"
slug: rclone_version
url: /commands/rclone_version/
@@ -53,261 +53,279 @@ rclone version [flags]
### Options inherited from parent commands
```
- --acd-auth-url string Auth server URL.
- --acd-client-id string Amazon Application Client ID.
- --acd-client-secret string Amazon Application Client Secret.
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
- --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
- --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-client-id string Box App Client Id.
- --box-client-secret string Box App Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
- --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
- --cache-plex-password string The password of the Plex user
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-password string Password or pass phrase for encryption.
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering (default)
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string Google Application Client Secret
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-keep-revision-forever Keep new head revision forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
- --drive-use-created-date Use created date instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
- --dropbox-client-id string Dropbox App Client Id
- --dropbox-client-secret string Dropbox App Client Secret
- -n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP bodies - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --exclude-if-present string Exclude directories if filename is present
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --ftp-host string FTP host to connect to
- --ftp-pass string FTP password
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-user string FTP username, leave blank for current username, ncw
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-client-id string Google Application Client Id
- --gcs-client-secret string Google Application Client Secret
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --http-url string URL of http host to connect to
- --hubic-client-id string Hubic Client Id
- --hubic-client-secret string Hubic Client Secret
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
- --jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
- --jottacloud-user string User Name
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
- --max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
- --mega-debug Output more debug from Mega.
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password.
- --mega-user string User name
- --memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
- --no-update-modtime Don't update destination mod-time if files identical.
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
- --onedrive-client-id string Microsoft App Client Id
- --onedrive-client-secret string Microsoft App Client Secret
- --opendrive-password string Password.
- --opendrive-username string Username
- --pcloud-client-id string Pcloud App Client Id
- --pcloud-client-secret string Pcloud App Client Secret
- -P, --progress Show progress during transfer.
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-connection-retries int Number of connnection retries. (default 3)
- --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-zone string Zone to connect to.
- -q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca string Client certificate authority to verify clients with
- --rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-key string SSL PEM Private key
- --rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-pass string Password for authentication.
- --rc-realm string realm for authentication (default "rclone")
- --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-user string User name for authentication.
- --retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- --sftp-pass string SSH password, leave blank to use ssh-agent.
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- --sftp-user string SSH username, leave blank for current username, ncw
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
- -v, --verbose count Print lots more stuff (repeat for more)
- --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
- --webdav-pass string Password.
- --webdav-url string URL of http host to connect to
- --webdav-user string User name
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-client-id string Yandex Client Id
- --yandex-client-secret string Yandex Client Secret
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ftp-host string FTP host to connect to
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-user string FTP username, leave blank for current username, ncw
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-mountpoint string The mountpoint to use.
+ --jottacloud-pass string Password.
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-user string User Name
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-transfer int Maximum size of data to transfer. (default off)
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ --memprofile string Write memory profile to file
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ -P, --progress Show progress during transfer.
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-zone string Zone to connect to.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ --union-remotes string List of space separated remotes.
+ -u, --update Skip files that are newer on the destination.
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
+ -v, --verbose count Print lots more stuff (repeat for more)
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
+* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-###### Auto generated by spf13/cobra on 1-Sep-2018
+###### Auto generated by spf13/cobra on 15-Oct-2018
diff --git a/docs/content/rc.md b/docs/content/rc.md
index 4c029bd60..80cee1e54 100644
--- a/docs/content/rc.md
+++ b/docs/content/rc.md
@@ -81,6 +81,33 @@ Eg
rclone rc cache/expire remote=path/to/sub/folder/
rclone rc cache/expire remote=/ withData=true
+### cache/fetch: Fetch file chunks
+
+Ensure the specified file chunks are cached on disk.
+
+The chunks= parameter specifies the file chunks to check.
+It takes a comma separated list of array slice indices.
+The slice indices are similar to Python slices: start[:end]
+
+start is the 0 based chunk number from the beginning of the file
+to fetch inclusive. end is 0 based chunk number from the beginning
+of the file to fetch exclisive.
+Both values can be negative, in which case they count from the back
+of the file. The value "-5:" represents the last 5 chunks of a file.
+
+Some valid examples are:
+":5,-5:" -> the first and last five chunks
+"0,-2" -> the first and the second last chunk
+"0:10" -> the first ten chunks
+
+Any parameter with a key that starts with "file" can be used to
+specify files to fetch, eg
+
+ rclone rc cache/fetch chunks=0 file=hello file2=home/goodbye
+
+File names will automatically be encrypted when a crypt remote
+is used on top of the cache.
+
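+For example, to make sure the first and last five chunks of a file
+are on disk (the path is illustrative):
+
+    rclone rc cache/fetch chunks=:5,-5: file=path/to/file
+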
### cache/stats: Get cache stats
Show statistics for the cache remote.
@@ -133,6 +160,8 @@ Returns the following values:
"speed": average speed in bytes/sec since start of the process,
"bytes": total transferred bytes since the start of the process,
"errors": number of errors,
+ "fatalError": whether there has been at least one FatalError,
+ "retryError": whether there has been at least one non-NoRetryError,
"checks": number of checked files,
"transfers": number of transferred files,
"deletes" : number of deleted files,
@@ -189,6 +218,28 @@ starting with dir will forget that dir, eg
rclone rc vfs/forget file=hello file2=goodbye dir=home/junk
+### vfs/poll-interval: Get the status or update the value of the poll-interval option.
+
+Without any parameter given this returns the current status of the
+poll-interval setting.
+
+When the interval=duration parameter is set, the poll-interval value
+is updated and the polling function is notified.
+Setting interval=0 disables poll-interval.
+
+ rclone rc vfs/poll-interval interval=5m
+
+The timeout=duration parameter can be used to specify a time to wait
+for the current poll function to apply the new value.
+If timeout is less than or equal to 0, which is the default, rclone
+waits indefinitely.
+
+The new poll-interval value will only take effect if it is applied
+before the timeout is reached.
+
+If poll-interval is updated or disabled temporarily, some changes
+might not get picked up by the polling function, depending on the
+used remote.
+
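+For example, to apply a new interval and wait up to 10s for the
+polling function to pick it up:
+
+    rclone rc vfs/poll-interval interval=1m timeout=10s
+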
### vfs/refresh: Refresh the directory cache.
This reads the directories for the specified paths and freshens the
diff --git a/docs/layouts/partials/version.html b/docs/layouts/partials/version.html
index 76382bba0..93e7ae287 100644
--- a/docs/layouts/partials/version.html
+++ b/docs/layouts/partials/version.html
@@ -1 +1 @@
-v1.43
\ No newline at end of file
+v1.44
\ No newline at end of file
diff --git a/fs/version.go b/fs/version.go
index 263653147..4144d8751 100644
--- a/fs/version.go
+++ b/fs/version.go
@@ -1,4 +1,4 @@
package fs
// Version of rclone
-var Version = "v1.43-DEV"
+var Version = "v1.44"
diff --git a/rclone.1 b/rclone.1
index b1e5b713a..df4c0f2d3 100644
--- a/rclone.1
+++ b/rclone.1
@@ -1,7 +1,7 @@
.\"t
.\" Automatically generated by Pandoc 1.19.2.4
.\"
-.TH "rclone" "1" "Sep 01, 2018" "User Manual" ""
+.TH "rclone" "1" "Oct 15, 2018" "User Manual" ""
.hy
.SH Rclone
.PP
@@ -99,9 +99,11 @@ hash equality
.IP \[bu] 2
Can sync to and from network, eg two different cloud accounts
.IP \[bu] 2
-Optional encryption (Crypt (https://rclone.org/crypt/))
+(Encryption (https://rclone.org/crypt/)) backend
.IP \[bu] 2
-Optional cache (Cache (https://rclone.org/cache/))
+(Cache (https://rclone.org/cache/)) backend
+.IP \[bu] 2
+(Union (https://rclone.org/union/)) backend
.IP \[bu] 2
Optional FUSE mount (rclone
mount (https://rclone.org/commands/rclone_mount/))
@@ -110,7 +112,7 @@ Links
.IP \[bu] 2
Home page (https://rclone.org/)
.IP \[bu] 2
-Github project page for source and bug
+GitHub project page for source and bug
tracker (https://github.com/ncw/rclone)
.IP \[bu] 2
Rclone Forum (https://forum.rclone.org)
@@ -357,6 +359,8 @@ QingStor (https://rclone.org/qingstor/)
.IP \[bu] 2
SFTP (https://rclone.org/sftp/)
.IP \[bu] 2
+Union (https://rclone.org/union/)
+.IP \[bu] 2
WebDAV (https://rclone.org/webdav/)
.IP \[bu] 2
Yandex Disk (https://rclone.org/yandex/)
@@ -2222,15 +2226,12 @@ rclone\ lsjson\ remote:path\ [flags]
.fi
.SS rclone mount
.PP
-Mount the remote as a mountpoint.
-\f[B]EXPERIMENTAL\f[]
+Mount the remote as file system on a mountpoint.
.SS Synopsis
.PP
rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of
Rclone\[aq]s cloud storage systems as a file system with FUSE.
.PP
-This is \f[B]EXPERIMENTAL\f[] \- use with care.
-.PP
First set up your remote using \f[C]rclone\ config\f[].
Check it works with \f[C]rclone\ ls\f[] etc.
.PP
@@ -2318,8 +2319,8 @@ systems are a long way from 100% reliable.
The rclone sync/copy commands cope with this with lots of retries.
However rclone mount can\[aq]t use retries in the same way without
making local copies of the uploads.
-Look at the \f[B]EXPERIMENTAL\f[] file caching (#file-caching) for
-solutions to make mount mount more reliable.
+Look at the file caching (#file-caching) for solutions to make mount
+mount more reliable.
.SS Attribute caching
.PP
You can use the flag \-\-attr\-timeout to set the time the kernel caches
@@ -2447,8 +2448,6 @@ The maximum memory used by rclone for buffering can be up to
\f[C]\-\-buffer\-size\ *\ open\ files\f[].
.SS File Caching
.PP
-\f[B]NB\f[] File caching is \f[B]EXPERIMENTAL\f[] \- use with care!
-.PP
These flags control the VFS file caching options.
The VFS layer is used by rclone mount to make a cloud storage system
work more like a normal file system.
@@ -2842,6 +2841,216 @@ rclone\ serve\ \ [opts]\ \ [flags]
\ \ \-h,\ \-\-help\ \ \ help\ for\ serve
\f[]
.fi
+.SS rclone serve ftp
+.PP
+Serve remote:path over FTP.
+.SS Synopsis
+.PP
+rclone serve ftp implements a basic FTP server to serve the remote over
+the FTP protocol.
+This can be accessed with an FTP client or you can make a remote of
+type ftp to read and write it.
+.SS Server options
+.PP
+Use \-\-addr to specify which IP address and port the server should
+listen on, eg \-\-addr 1.2.3.4:8000 or \-\-addr :8080 to listen to all
+IPs.
+By default it only listens on localhost.
+You can use port :0 to let the OS choose an available port.
+.PP
+If you set \-\-addr to listen on a public or LAN accessible IP address
+then using Authentication is advised \- see the next section for info.
+.SS Authentication
+.PP
+By default this will serve files without needing a login.
+.PP
+You can set a single username and password with the \-\-user and
+\-\-pass flags.
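+.PP
+For example, to serve a remote on all interfaces with a login (the
+username and password values are illustrative):
+.IP
+.nf
+\f[C]
+rclone\ serve\ ftp\ remote:path\ \-\-addr\ :2121\ \-\-user\ alice\ \-\-pass\ secret
+\f[]
+.fi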
+.SS Directory Cache
+.PP
+Using the \f[C]\-\-dir\-cache\-time\f[] flag, you can set how long a
+directory should be considered up to date and not refreshed from the
+backend.
+Changes made locally in the mount may appear immediately or invalidate
+the cache.
+However, changes done on the remote will only be picked up once the
+cache expires.
+.PP
+Alternatively, you can send a \f[C]SIGHUP\f[] signal to rclone for it to
+flush all directory caches, regardless of how old they are.
+Assuming only one rclone instance is running, you can reset the cache
+like this:
+.IP
+.nf
+\f[C]
+kill\ \-SIGHUP\ $(pidof\ rclone)
+\f[]
+.fi
+.PP
+If you configure rclone with a remote control (/rc) then you can use
+rclone rc to flush the whole directory cache:
+.IP
+.nf
+\f[C]
+rclone\ rc\ vfs/forget
+\f[]
+.fi
+.PP
+Or individual files or directories:
+.IP
+.nf
+\f[C]
+rclone\ rc\ vfs/forget\ file=path/to/file\ dir=path/to/dir
+\f[]
+.fi
+.SS File Buffering
+.PP
+The \f[C]\-\-buffer\-size\f[] flag determines the amount of memory that
+will be used to buffer data in advance.
+.PP
+Each open file descriptor will try to keep the specified amount of data
+in memory at all times.
+The buffered data is bound to one file descriptor and won\[aq]t be
+shared between multiple open file descriptors of the same file.
+.PP
+This flag is an upper limit for the memory used per file descriptor.
+The buffer will only use memory for data that is downloaded but not
+yet read.
+If the buffer is empty, only a small amount of memory will be used.
+The maximum memory used by rclone for buffering can be up to
+\f[C]\-\-buffer\-size\ *\ open\ files\f[].
+.SS File Caching
+.PP
+These flags control the VFS file caching options.
+The VFS layer is used by rclone mount to make a cloud storage system
+work more like a normal file system.
+.PP
+You\[aq]ll need to enable VFS caching if you want, for example, to read
+and write simultaneously to a file.
+See below for more details.
+.PP
+Note that the VFS cache works in addition to the cache backend and you
+may find that you need one or the other or both.
+.IP
+.nf
+\f[C]
+\-\-cache\-dir\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Directory\ rclone\ will\ use\ for\ caching.
+\-\-vfs\-cache\-max\-age\ duration\ \ \ \ \ \ \ \ \ Max\ age\ of\ objects\ in\ the\ cache.\ (default\ 1h0m0s)
+\-\-vfs\-cache\-mode\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ Cache\ mode\ off|minimal|writes|full\ (default\ "off")
+\-\-vfs\-cache\-poll\-interval\ duration\ \ \ Interval\ to\ poll\ the\ cache\ for\ stale\ objects.\ (default\ 1m0s)
+\f[]
+.fi
+.PP
+If run with \f[C]\-vv\f[] rclone will print the location of the file
+cache.
+The files are stored in the user cache file area which is OS dependent
+but can be controlled with \f[C]\-\-cache\-dir\f[] or setting the
+appropriate environment variable.
+.PP
+The cache has 4 different modes selected by
+\f[C]\-\-vfs\-cache\-mode\f[].
+The higher the cache mode the more compatible rclone becomes at the cost
+of using disk space.
+.PP
+Note that files are written back to the remote only when they are closed
+so if rclone is quit or dies with open files then these won\[aq]t get
+written back to the remote.
+However they will still be in the on disk cache.
+.SS \-\-vfs\-cache\-mode off
+.PP
+In this mode the cache will read directly from the remote and write
+directly to the remote without caching anything on disk.
+.PP
+This will mean some operations are not possible
+.IP \[bu] 2
+Files can\[aq]t be opened for both read AND write
+.IP \[bu] 2
+Files opened for write can\[aq]t be seeked
+.IP \[bu] 2
+Existing files opened for write must have O_TRUNC set
+.IP \[bu] 2
+Files open for read with O_TRUNC will be opened write only
+.IP \[bu] 2
+Files open for write only will behave as if O_TRUNC was supplied
+.IP \[bu] 2
+Open modes O_APPEND, O_TRUNC are ignored
+.IP \[bu] 2
+If an upload fails it can\[aq]t be retried
+.SS \-\-vfs\-cache\-mode minimal
+.PP
+This is very similar to "off" except that files opened for read AND
+write will be buffered to disk.
+This means that files opened for write will be a lot more compatible,
+while using minimal disk space.
+.PP
+These operations are not possible
+.IP \[bu] 2
+Files opened for write only can\[aq]t be seeked
+.IP \[bu] 2
+Existing files opened for write must have O_TRUNC set
+.IP \[bu] 2
+Files opened for write only will ignore O_APPEND, O_TRUNC
+.IP \[bu] 2
+If an upload fails it can\[aq]t be retried
+.SS \-\-vfs\-cache\-mode writes
+.PP
+In this mode files opened for read only are still read directly from the
+remote, write only and read/write files are buffered to disk first.
+.PP
+This mode should support all normal file system operations.
+.PP
+If an upload fails it will be retried up to \-\-low\-level\-retries
+times.
+.SS \-\-vfs\-cache\-mode full
+.PP
+In this mode all reads and writes are buffered to and from disk.
+When a file is opened for read it will be downloaded in its entirety
+first.
+.PP
+This may be appropriate for your needs, or you may prefer to look at the
+cache backend which does a much more sophisticated job of caching,
+including caching directory hierarchies and chunks of files.
+.PP
+In this mode, unlike the others, when a file is written to the disk, it
+will be kept on the disk after it is written to the remote.
+It will be purged on a schedule according to
+\f[C]\-\-vfs\-cache\-max\-age\f[].
+.PP
+This mode should support all normal file system operations.
+.PP
+If an upload or download fails it will be retried up to
+\-\-low\-level\-retries times.
+.IP
+.nf
+\f[C]
+rclone\ serve\ ftp\ remote:path\ [flags]
+\f[]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
+\ \ \ \ \ \ \-\-addr\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ IPaddress:Port\ or\ :Port\ to\ bind\ server\ to.\ (default\ "localhost:2121")
+\ \ \ \ \ \ \-\-dir\-cache\-time\ duration\ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ cache\ directory\ entries\ for.\ (default\ 5m0s)
+\ \ \ \ \ \ \-\-gid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ gid\ field\ set\ by\ the\ filesystem.\ (default\ 502)
+\ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ help\ for\ ftp
+\ \ \ \ \ \ \-\-no\-checksum\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ compare\ checksums\ on\ up/download.
+\ \ \ \ \ \ \-\-no\-modtime\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ read/write\ the\ modification\ time\ (can\ speed\ things\ up).
+\ \ \ \ \ \ \-\-no\-seek\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ allow\ seeking\ in\ files.
+\ \ \ \ \ \ \-\-pass\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Password\ for\ authentication.\ (empty\ value\ allow\ every\ password)
+\ \ \ \ \ \ \-\-passive\-port\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Passive\ port\ range\ to\ use.\ (default\ "30000\-32000")
+\ \ \ \ \ \ \-\-poll\-interval\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ wait\ between\ polling\ for\ changes.\ Must\ be\ smaller\ than\ dir\-cache\-time.\ Only\ on\ supported\ remotes.\ Set\ to\ 0\ to\ disable.\ (default\ 1m0s)
+\ \ \ \ \ \ \-\-read\-only\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Mount\ read\-only.
+\ \ \ \ \ \ \-\-uid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ uid\ field\ set\ by\ the\ filesystem.\ (default\ 502)
+\ \ \ \ \ \ \-\-umask\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ permission\ bits\ set\ by\ the\ filesystem.\ (default\ 2)
+\ \ \ \ \ \ \-\-user\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ User\ name\ for\ authentication.\ (default\ "anonymous")
+\ \ \ \ \ \ \-\-vfs\-cache\-max\-age\ duration\ \ \ \ \ \ \ \ \ Max\ age\ of\ objects\ in\ the\ cache.\ (default\ 1h0m0s)
+\ \ \ \ \ \ \-\-vfs\-cache\-mode\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ Cache\ mode\ off|minimal|writes|full\ (default\ "off")
+\ \ \ \ \ \ \-\-vfs\-cache\-poll\-interval\ duration\ \ \ Interval\ to\ poll\ the\ cache\ for\ stale\ objects.\ (default\ 1m0s)
+\ \ \ \ \ \ \-\-vfs\-read\-chunk\-size\ int\ \ \ \ \ \ \ \ \ \ \ \ Read\ the\ source\ objects\ in\ chunks.\ (default\ 128M)
+\ \ \ \ \ \ \-\-vfs\-read\-chunk\-size\-limit\ int\ \ \ \ \ \ If\ greater\ than\ \-\-vfs\-read\-chunk\-size,\ double\ the\ chunk\ size\ after\ each\ chunk\ read,\ until\ the\ limit\ is\ reached.\ \[aq]off\[aq]\ is\ unlimited.\ (default\ off)
+\f[]
+.fi
.SS rclone serve http
.PP
Serve the remote over HTTP.
@@ -2969,8 +3178,6 @@ The maximum memory used by rclone for buffering can be up to
\f[C]\-\-buffer\-size\ *\ open\ files\f[].
.SS File Caching
.PP
-\f[B]NB\f[] File caching is \f[B]EXPERIMENTAL\f[] \- use with care!
-.PP
These flags control the VFS file caching options.
The VFS layer is used by rclone mount to make a cloud storage system
work more like a normal file system.
@@ -3406,8 +3613,6 @@ The maximum memory used by rclone for buffering can be up to
\f[C]\-\-buffer\-size\ *\ open\ files\f[].
.SS File Caching
.PP
-\f[B]NB\f[] File caching is \f[B]EXPERIMENTAL\f[] \- use with care!
-.PP
These flags control the VFS file caching options.
The VFS layer is used by rclone mount to make a cloud storage system
work more like a normal file system.
@@ -3546,6 +3751,60 @@ rclone\ serve\ webdav\ remote:path\ [flags]
\ \ \ \ \ \ \-\-vfs\-read\-chunk\-size\-limit\ int\ \ \ \ \ \ If\ greater\ than\ \-\-vfs\-read\-chunk\-size,\ double\ the\ chunk\ size\ after\ each\ chunk\ read,\ until\ the\ limit\ is\ reached.\ \[aq]off\[aq]\ is\ unlimited.\ (default\ off)
\f[]
.fi
+.SS rclone settier
+.PP
+Changes storage class/tier of objects in remote.
+.SS Synopsis
+.PP
+rclone settier changes the storage tier or class of objects at the
+remote, if supported.
+A few cloud storage services provide different storage classes for
+objects, for example AWS S3 and Glacier, Azure Blob storage \- Hot,
+Cool and Archive, and Google Cloud Storage \- Regional Storage,
+Nearline, Coldline etc.
+.PP
+Note that certain tier changes make objects unavailable for immediate
+access.
+For example, tiering to archive in Azure Blob storage leaves objects in
+a frozen state; the user can restore them by setting the tier to
+Hot/Cool. Similarly, tiering S3 objects to Glacier makes them
+inaccessible.
+.PP
+You can use it to tier single object
+.IP
+.nf
+\f[C]
+rclone\ settier\ Cool\ remote:path/file
+\f[]
+.fi
+.PP
+Or use rclone filters to set tier on only specific files
+.IP
+.nf
+\f[C]
+rclone\ \-\-include\ "*.txt"\ settier\ Hot\ remote:path/dir
+\f[]
+.fi
+.PP
+Or just provide remote directory and all files in directory will be
+tiered
+.IP
+.nf
+\f[C]
+rclone\ settier\ tier\ remote:path/dir
+\f[]
+.fi
+.IP
+.nf
+\f[C]
+rclone\ settier\ tier\ remote:path\ [flags]
+\f[]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
+\ \ \-h,\ \-\-help\ \ \ help\ for\ settier
+\f[]
+.fi
.SS rclone touch
.PP
Create new file or change file modification time.
@@ -4150,6 +4409,12 @@ See the Logging section (#logging) for more info.
Note that if you are using the \f[C]logrotate\f[] program to manage
rclone\[aq]s logs, then you should use the \f[C]copytruncate\f[] option
as rclone doesn\[aq]t have a signal to rotate logs.
+.SS \-\-log\-format LIST
+.PP
+Comma separated list of log format options.
+\f[C]date\f[], \f[C]time\f[], \f[C]microseconds\f[], \f[C]longfile\f[],
+\f[C]shortfile\f[], \f[C]UTC\f[].
+The default is "\f[C]date\f[],\f[C]time\f[]".
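+.PP
+For example, to add microsecond resolution and the logging call site to
+each log line (the paths are illustrative):
+.IP
+.nf
+\f[C]
+rclone\ \-\-log\-format\ "date,time,microseconds,shortfile"\ ls\ remote:path
+\f[]
+.fi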
.SS \-\-log\-level LEVEL
.PP
This sets the log level for rclone.
@@ -4264,7 +4529,7 @@ remote files if they are incorrect as it would normally.
.PP
This can be used if the remote is being synced with another tool also
(eg the Google Drive client).
-.SS \-\-P, \-\-progress
+.SS \-P, \-\-progress
.PP
This flag makes rclone update the stats in a static block in the
terminal providing a realtime overview of the transfer.
@@ -4278,6 +4543,11 @@ with the \f[C]\-\-stats\f[] flag.
.PP
This can be used with the \f[C]\-\-stats\-one\-line\f[] flag for a
simpler display.
+.PP
+Note: On Windows until this
+bug (https://github.com/Azure/go-ansiterm/issues/26) is fixed all
+non\-ASCII characters will be replaced with \f[C]\&.\f[] when
+\f[C]\-\-progress\f[] is in use.
.SS \-q, \-\-quiet
.PP
Normally rclone outputs stats and a completion message.
@@ -4435,6 +4705,8 @@ will be considered.
If the destination does not support server\-side copy or move, rclone
will fall back to the default behaviour and log an error level message
to the console.
+Note: Encrypted destinations are not supported by
+\f[C]\-\-track\-renames\f[].
.PP
Note that \f[C]\-\-track\-renames\f[] uses extra memory to keep track of
all the rename candidates.
@@ -5705,6 +5977,37 @@ rclone\ rc\ cache/expire\ remote=path/to/sub/folder/
rclone\ rc\ cache/expire\ remote=/\ withData=true
\f[]
.fi
+.SS cache/fetch: Fetch file chunks
+.PP
+Ensure the specified file chunks are cached on disk.
+.PP
+The chunks= parameter specifies the file chunks to check.
+It takes a comma separated list of array slice indices.
+The slice indices are similar to Python slices: start[:end]
+.PP
+start is the 0 based chunk number from the beginning of the file to
+fetch inclusive.
+end is the 0 based chunk number from the beginning of the file to fetch
+exclusive.
+Both values can be negative, in which case they count from the back of
+the file.
+The value "\-5:" represents the last 5 chunks of a file.
+.PP
+Some valid examples are: ":5,\-5:" \-> the first and last five chunks
+"0,\-2" \-> the first and the second last chunk "0:10" \-> the first ten
+chunks
+.PP
+Any parameter with a key that starts with "file" can be used to specify
+files to fetch, eg
+.IP
+.nf
+\f[C]
+rclone\ rc\ cache/fetch\ chunks=0\ file=hello\ file2=home/goodbye
+\f[]
+.fi
+.PP
+File names will automatically be encrypted when a crypt remote is
+used on top of the cache.
.SS cache/stats: Get cache stats
.PP
Show statistics for the cache remote.
@@ -5765,6 +6068,8 @@ Returns the following values:
\ \ \ \ "speed":\ average\ speed\ in\ bytes/sec\ since\ start\ of\ the\ process,
\ \ \ \ "bytes":\ total\ transferred\ bytes\ since\ the\ start\ of\ the\ process,
\ \ \ \ "errors":\ number\ of\ errors,
+\ \ \ \ "fatalError":\ whether\ there\ has\ been\ at\ least\ one\ FatalError,
+\ \ \ \ "retryError":\ whether\ there\ has\ been\ at\ least\ one\ non\-NoRetryError,
\ \ \ \ "checks":\ number\ of\ checked\ files,
\ \ \ \ "transfers":\ number\ of\ transferred\ files,
\ \ \ \ "deletes"\ :\ number\ of\ deleted\ files,
@@ -5828,6 +6133,31 @@ starting with dir will forget that dir, eg
rclone\ rc\ vfs/forget\ file=hello\ file2=goodbye\ dir=home/junk
\f[]
.fi
+.SS vfs/poll\-interval: Get the status or update the value of the
+poll\-interval option.
+.PP
+Without any parameter given this returns the current status of the
+poll\-interval setting.
+.PP
+When the interval=duration parameter is set, the poll\-interval value is
+updated and the polling function is notified.
+Setting interval=0 disables poll\-interval.
+.IP
+.nf
+\f[C]
+rclone\ rc\ vfs/poll\-interval\ interval=5m
+\f[]
+.fi
+.PP
+The timeout=duration parameter can be used to specify a time to wait for
+the current poll function to apply the new value.
+If timeout is less than or equal to 0, which is the default, rclone
+waits indefinitely.
+.PP
+The new poll\-interval value will only take effect if it is applied
+before the timeout is reached.
+.PP
+If poll\-interval is updated or disabled temporarily, some changes might
+not get picked up by the polling function, depending on the used remote.
.SS vfs/refresh: Refresh the directory cache.
.PP
This reads the directories for the specified paths and freshens the
@@ -5873,6 +6203,10 @@ This is formatted to be reasonably human readable.
.PP
If an error occurs then there will be an HTTP error status (usually 400)
and the body of the response will contain a JSON encoded error object.
+.PP
+The server implements basic CORS support and allows all origins.
+The response to a preflight OPTIONS request will echo the requested
+"Access\-Control\-Request\-Headers" back.
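As an illustration of the preflight behaviour described above, a hypothetical curl invocation against a local rc server (assuming the default listen address of localhost:5572) might look like:

```shell
# Send a CORS preflight OPTIONS request; the response should allow
# all origins and echo Content-Type back in
# Access-Control-Allow-Headers.
curl -i -X OPTIONS \
  -H "Origin: http://example.com" \
  -H "Access-Control-Request-Headers: Content-Type" \
  http://localhost:5572/rc/noop
```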
.SS Using POST with URL parameters only
.IP
.nf
@@ -6564,7 +6898,7 @@ No
T}@T{
Yes
T}@T{
-No #2178 (https://github.com/ncw/rclone/issues/2178)
+Yes
T}@T{
No
T}
@@ -6707,13 +7041,13 @@ Yes
T}@T{
No
T}@T{
-No
+Yes
T}@T{
No
T}@T{
-No
+Yes
T}@T{
-No
+Yes
T}
T{
Mega
@@ -6774,7 +7108,7 @@ No
T}@T{
No
T}@T{
-No #2178 (https://github.com/ncw/rclone/issues/2178)
+Yes
T}@T{
Yes
T}
@@ -7163,6 +7497,23 @@ Copy another local directory to the alias directory called source
rclone\ copy\ /home/source\ remote:source
\f[]
.fi
+.SS Standard Options
+.PP
+Here are the standard options specific to alias (Alias for an existing
+remote).
+.SS \-\-alias\-remote
+.PP
+Remote or path to alias.
+Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or
+"/local/path".
+.IP \[bu] 2
+Config: remote
+.IP \[bu] 2
+Env Var: RCLONE_ALIAS_REMOTE
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
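Expressed as a config file entry, the option above corresponds to a stanza like the following (the remote name "mysource" and the path are illustrative):

```ini
[mysource]
type = alias
remote = /home/source
```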
.SS Amazon Drive
.PP
Amazon Drive, formerly known as Amazon Cloud Drive, is a cloud storage
@@ -7351,20 +7702,75 @@ When you authenticate with rclone it will take you to an
\f[C]amazon.com\f[] page to log in.
Your \f[C]amazon.co.uk\f[] email and password should work here just
fine.
-.SS Specific options
+.SS Standard Options
.PP
-Here are the command line options specific to this cloud storage system.
-.SS \-\-acd\-templink\-threshold=SIZE
+Here are the standard options specific to amazon cloud drive (Amazon
+Drive).
+.SS \-\-acd\-client\-id
.PP
-Files this size or more will be downloaded via their \f[C]tempLink\f[].
-This is to work around a problem with Amazon Drive which blocks
-downloads of files bigger than about 10GB.
-The default for this is 9GB which shouldn\[aq]t need to be changed.
+Amazon Application Client ID.
+.IP \[bu] 2
+Config: client_id
+.IP \[bu] 2
+Env Var: RCLONE_ACD_CLIENT_ID
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-acd\-client\-secret
.PP
-To download files above this threshold, rclone requests a
-\f[C]tempLink\f[] which downloads the file through a temporary URL
-directly from the underlying S3 storage.
-.SS \-\-acd\-upload\-wait\-per\-gb=TIME
+Amazon Application Client Secret.
+.IP \[bu] 2
+Config: client_secret
+.IP \[bu] 2
+Env Var: RCLONE_ACD_CLIENT_SECRET
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS Advanced Options
+.PP
+Here are the advanced options specific to amazon cloud drive (Amazon
+Drive).
+.SS \-\-acd\-auth\-url
+.PP
+Auth server URL.
+Leave blank to use Amazon\[aq]s.
+.IP \[bu] 2
+Config: auth_url
+.IP \[bu] 2
+Env Var: RCLONE_ACD_AUTH_URL
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-acd\-token\-url
+.PP
+Token server URL.
+Leave blank to use Amazon\[aq]s.
+.IP \[bu] 2
+Config: token_url
+.IP \[bu] 2
+Env Var: RCLONE_ACD_TOKEN_URL
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-acd\-checkpoint
+.PP
+Checkpoint for internal polling (debug).
+.IP \[bu] 2
+Config: checkpoint
+.IP \[bu] 2
+Env Var: RCLONE_ACD_CHECKPOINT
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-acd\-upload\-wait\-per\-gb
+.PP
+Additional time per GB to wait after a failed complete upload to see if
+it appears.
.PP
Sometimes Amazon Drive gives an error when a file has been fully
uploaded but the file appears anyway after a little while.
@@ -7382,8 +7788,36 @@ the file will most likely appear correctly eventually.
These values were determined empirically by observing lots of uploads of
big files for a range of file sizes.
.PP
-Upload with the \f[C]\-v\f[] flag to see more info about what rclone is
-doing in this situation.
+Upload with the "\-v" flag to see more info about what rclone is doing
+in this situation.
+.IP \[bu] 2
+Config: upload_wait_per_gb
+.IP \[bu] 2
+Env Var: RCLONE_ACD_UPLOAD_WAIT_PER_GB
+.IP \[bu] 2
+Type: Duration
+.IP \[bu] 2
+Default: 3m0s
+.SS \-\-acd\-templink\-threshold
+.PP
+Files >= this size will be downloaded via their tempLink.
+.PP
+Files this size or more will be downloaded via their "tempLink".
+This is to work around a problem with Amazon Drive which blocks
+downloads of files bigger than about 10GB.
+The default for this is 9GB which shouldn\[aq]t need to be changed.
+.PP
+To download files above this threshold, rclone requests a "tempLink"
+which downloads the file through a temporary URL directly from the
+underlying S3 storage.
+.IP \[bu] 2
+Config: templink_threshold
+.IP \[bu] 2
+Env Var: RCLONE_ACD_TEMPLINK_THRESHOLD
+.IP \[bu] 2
+Type: SizeSuffix
+.IP \[bu] 2
+Default: 9G
.SS Limitations
.PP
Note that Amazon Drive is case insensitive so you can\[aq]t have a file
@@ -7844,40 +8278,1160 @@ tries to access the data you will see an error like below.
In this case you need to
restore (http://docs.aws.amazon.com/AmazonS3/latest/user-guide/restore-archived-objects.html)
the object(s) in question before using rclone.
-.SS Specific options
+.SS Standard Options
.PP
-Here are the command line options specific to this cloud storage system.
-.SS \-\-s3\-acl=STRING
+Here are the standard options specific to s3 (Amazon S3 Compliant
+Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)).
+.SS \-\-s3\-provider
+.PP
+Choose your S3 provider.
+.IP \[bu] 2
+Config: provider
+.IP \[bu] 2
+Env Var: RCLONE_S3_PROVIDER
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+"AWS"
+.RS 2
+.IP \[bu] 2
+Amazon Web Services (AWS) S3
+.RE
+.IP \[bu] 2
+"Ceph"
+.RS 2
+.IP \[bu] 2
+Ceph Object Storage
+.RE
+.IP \[bu] 2
+"DigitalOcean"
+.RS 2
+.IP \[bu] 2
+Digital Ocean Spaces
+.RE
+.IP \[bu] 2
+"Dreamhost"
+.RS 2
+.IP \[bu] 2
+Dreamhost DreamObjects
+.RE
+.IP \[bu] 2
+"IBMCOS"
+.RS 2
+.IP \[bu] 2
+IBM COS S3
+.RE
+.IP \[bu] 2
+"Minio"
+.RS 2
+.IP \[bu] 2
+Minio Object Storage
+.RE
+.IP \[bu] 2
+"Wasabi"
+.RS 2
+.IP \[bu] 2
+Wasabi Object Storage
+.RE
+.IP \[bu] 2
+"Other"
+.RS 2
+.IP \[bu] 2
+Any other S3 compatible provider
+.RE
+.RE
+.SS \-\-s3\-env\-auth
+.PP
+Get AWS credentials from runtime (environment variables or EC2/ECS meta
+data if no env vars).
+Only applies if access_key_id and secret_access_key are blank.
+.IP \[bu] 2
+Config: env_auth
+.IP \[bu] 2
+Env Var: RCLONE_S3_ENV_AUTH
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+"false"
+.RS 2
+.IP \[bu] 2
+Enter AWS credentials in the next step
+.RE
+.IP \[bu] 2
+"true"
+.RS 2
+.IP \[bu] 2
+Get AWS credentials from the environment (env vars or IAM)
+.RE
+.RE
+.SS \-\-s3\-access\-key\-id
+.PP
+AWS Access Key ID.
+Leave blank for anonymous access or runtime credentials.
+.IP \[bu] 2
+Config: access_key_id
+.IP \[bu] 2
+Env Var: RCLONE_S3_ACCESS_KEY_ID
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-s3\-secret\-access\-key
+.PP
+AWS Secret Access Key (password).
+Leave blank for anonymous access or
+runtime credentials.
+.IP \[bu] 2
+Config: secret_access_key
+.IP \[bu] 2
+Env Var: RCLONE_S3_SECRET_ACCESS_KEY
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-s3\-region
+.PP
+Region to connect to.
+.IP \[bu] 2
+Config: region
+.IP \[bu] 2
+Env Var: RCLONE_S3_REGION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+"us\-east\-1"
+.RS 2
+.IP \[bu] 2
+The default endpoint \- a good choice if you are unsure.
+.IP \[bu] 2
+US Region, Northern Virginia or Pacific Northwest.
+.IP \[bu] 2
+Leave location constraint empty.
+.RE
+.IP \[bu] 2
+"us\-east\-2"
+.RS 2
+.IP \[bu] 2
+US East (Ohio) Region
+.IP \[bu] 2
+Needs location constraint us\-east\-2.
+.RE
+.IP \[bu] 2
+"us\-west\-2"
+.RS 2
+.IP \[bu] 2
+US West (Oregon) Region
+.IP \[bu] 2
+Needs location constraint us\-west\-2.
+.RE
+.IP \[bu] 2
+"us\-west\-1"
+.RS 2
+.IP \[bu] 2
+US West (Northern California) Region
+.IP \[bu] 2
+Needs location constraint us\-west\-1.
+.RE
+.IP \[bu] 2
+"ca\-central\-1"
+.RS 2
+.IP \[bu] 2
+Canada (Central) Region
+.IP \[bu] 2
+Needs location constraint ca\-central\-1.
+.RE
+.IP \[bu] 2
+"eu\-west\-1"
+.RS 2
+.IP \[bu] 2
+EU (Ireland) Region
+.IP \[bu] 2
+Needs location constraint EU or eu\-west\-1.
+.RE
+.IP \[bu] 2
+"eu\-west\-2"
+.RS 2
+.IP \[bu] 2
+EU (London) Region
+.IP \[bu] 2
+Needs location constraint eu\-west\-2.
+.RE
+.IP \[bu] 2
+"eu\-central\-1"
+.RS 2
+.IP \[bu] 2
+EU (Frankfurt) Region
+.IP \[bu] 2
+Needs location constraint eu\-central\-1.
+.RE
+.IP \[bu] 2
+"ap\-southeast\-1"
+.RS 2
+.IP \[bu] 2
+Asia Pacific (Singapore) Region
+.IP \[bu] 2
+Needs location constraint ap\-southeast\-1.
+.RE
+.IP \[bu] 2
+"ap\-southeast\-2"
+.RS 2
+.IP \[bu] 2
+Asia Pacific (Sydney) Region
+.IP \[bu] 2
+Needs location constraint ap\-southeast\-2.
+.RE
+.IP \[bu] 2
+"ap\-northeast\-1"
+.RS 2
+.IP \[bu] 2
+Asia Pacific (Tokyo) Region
+.IP \[bu] 2
+Needs location constraint ap\-northeast\-1.
+.RE
+.IP \[bu] 2
+"ap\-northeast\-2"
+.RS 2
+.IP \[bu] 2
+Asia Pacific (Seoul)
+.IP \[bu] 2
+Needs location constraint ap\-northeast\-2.
+.RE
+.IP \[bu] 2
+"ap\-south\-1"
+.RS 2
+.IP \[bu] 2
+Asia Pacific (Mumbai)
+.IP \[bu] 2
+Needs location constraint ap\-south\-1.
+.RE
+.IP \[bu] 2
+"sa\-east\-1"
+.RS 2
+.IP \[bu] 2
+South America (Sao Paulo) Region
+.IP \[bu] 2
+Needs location constraint sa\-east\-1.
+.RE
+.RE
+.SS \-\-s3\-region
+.PP
+Region to connect to.
+Leave blank if you are using an S3 clone and you don\[aq]t have a
+region.
+.IP \[bu] 2
+Config: region
+.IP \[bu] 2
+Env Var: RCLONE_S3_REGION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+""
+.RS 2
+.IP \[bu] 2
+Use this if unsure.
+Will use v4 signatures and an empty region.
+.RE
+.IP \[bu] 2
+"other\-v2\-signature"
+.RS 2
+.IP \[bu] 2
+Use this only if v4 signatures don\[aq]t work, eg pre Jewel/v10 CEPH.
+.RE
+.RE
+.SS \-\-s3\-endpoint
+.PP
+Endpoint for S3 API.
+Leave blank if using AWS to use the default endpoint for the region.
+.IP \[bu] 2
+Config: endpoint
+.IP \[bu] 2
+Env Var: RCLONE_S3_ENDPOINT
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-s3\-endpoint
+.PP
+Endpoint for IBM COS S3 API.
+Specify when using IBM COS on premise.
+.IP \[bu] 2
+Config: endpoint
+.IP \[bu] 2
+Env Var: RCLONE_S3_ENDPOINT
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+"s3\-api.us\-geo.objectstorage.softlayer.net"
+.RS 2
+.IP \[bu] 2
+US Cross Region Endpoint
+.RE
+.IP \[bu] 2
+"s3\-api.dal.us\-geo.objectstorage.softlayer.net"
+.RS 2
+.IP \[bu] 2
+US Cross Region Dallas Endpoint
+.RE
+.IP \[bu] 2
+"s3\-api.wdc\-us\-geo.objectstorage.softlayer.net"
+.RS 2
+.IP \[bu] 2
+US Cross Region Washington DC Endpoint
+.RE
+.IP \[bu] 2
+"s3\-api.sjc\-us\-geo.objectstorage.softlayer.net"
+.RS 2
+.IP \[bu] 2
+US Cross Region San Jose Endpoint
+.RE
+.IP \[bu] 2
+"s3\-api.us\-geo.objectstorage.service.networklayer.com"
+.RS 2
+.IP \[bu] 2
+US Cross Region Private Endpoint
+.RE
+.IP \[bu] 2
+"s3\-api.dal\-us\-geo.objectstorage.service.networklayer.com"
+.RS 2
+.IP \[bu] 2
+US Cross Region Dallas Private Endpoint
+.RE
+.IP \[bu] 2
+"s3\-api.wdc\-us\-geo.objectstorage.service.networklayer.com"
+.RS 2
+.IP \[bu] 2
+US Cross Region Washington DC Private Endpoint
+.RE
+.IP \[bu] 2
+"s3\-api.sjc\-us\-geo.objectstorage.service.networklayer.com"
+.RS 2
+.IP \[bu] 2
+US Cross Region San Jose Private Endpoint
+.RE
+.IP \[bu] 2
+"s3.us\-east.objectstorage.softlayer.net"
+.RS 2
+.IP \[bu] 2
+US Region East Endpoint
+.RE
+.IP \[bu] 2
+"s3.us\-east.objectstorage.service.networklayer.com"
+.RS 2
+.IP \[bu] 2
+US Region East Private Endpoint
+.RE
+.IP \[bu] 2
+"s3.us\-south.objectstorage.softlayer.net"
+.RS 2
+.IP \[bu] 2
+US Region South Endpoint
+.RE
+.IP \[bu] 2
+"s3.us\-south.objectstorage.service.networklayer.com"
+.RS 2
+.IP \[bu] 2
+US Region South Private Endpoint
+.RE
+.IP \[bu] 2
+"s3.eu\-geo.objectstorage.softlayer.net"
+.RS 2
+.IP \[bu] 2
+EU Cross Region Endpoint
+.RE
+.IP \[bu] 2
+"s3.fra\-eu\-geo.objectstorage.softlayer.net"
+.RS 2
+.IP \[bu] 2
+EU Cross Region Frankfurt Endpoint
+.RE
+.IP \[bu] 2
+"s3.mil\-eu\-geo.objectstorage.softlayer.net"
+.RS 2
+.IP \[bu] 2
+EU Cross Region Milan Endpoint
+.RE
+.IP \[bu] 2
+"s3.ams\-eu\-geo.objectstorage.softlayer.net"
+.RS 2
+.IP \[bu] 2
+EU Cross Region Amsterdam Endpoint
+.RE
+.IP \[bu] 2
+"s3.eu\-geo.objectstorage.service.networklayer.com"
+.RS 2
+.IP \[bu] 2
+EU Cross Region Private Endpoint
+.RE
+.IP \[bu] 2
+"s3.fra\-eu\-geo.objectstorage.service.networklayer.com"
+.RS 2
+.IP \[bu] 2
+EU Cross Region Frankfurt Private Endpoint
+.RE
+.IP \[bu] 2
+"s3.mil\-eu\-geo.objectstorage.service.networklayer.com"
+.RS 2
+.IP \[bu] 2
+EU Cross Region Milan Private Endpoint
+.RE
+.IP \[bu] 2
+"s3.ams\-eu\-geo.objectstorage.service.networklayer.com"
+.RS 2
+.IP \[bu] 2
+EU Cross Region Amsterdam Private Endpoint
+.RE
+.IP \[bu] 2
+"s3.eu\-gb.objectstorage.softlayer.net"
+.RS 2
+.IP \[bu] 2
+Great Britain Endpoint
+.RE
+.IP \[bu] 2
+"s3.eu\-gb.objectstorage.service.networklayer.com"
+.RS 2
+.IP \[bu] 2
+Great Britain Private Endpoint
+.RE
+.IP \[bu] 2
+"s3.ap\-geo.objectstorage.softlayer.net"
+.RS 2
+.IP \[bu] 2
+APAC Cross Regional Endpoint
+.RE
+.IP \[bu] 2
+"s3.tok\-ap\-geo.objectstorage.softlayer.net"
+.RS 2
+.IP \[bu] 2
+APAC Cross Regional Tokyo Endpoint
+.RE
+.IP \[bu] 2
+"s3.hkg\-ap\-geo.objectstorage.softlayer.net"
+.RS 2
+.IP \[bu] 2
+APAC Cross Regional HongKong Endpoint
+.RE
+.IP \[bu] 2
+"s3.seo\-ap\-geo.objectstorage.softlayer.net"
+.RS 2
+.IP \[bu] 2
+APAC Cross Regional Seoul Endpoint
+.RE
+.IP \[bu] 2
+"s3.ap\-geo.objectstorage.service.networklayer.com"
+.RS 2
+.IP \[bu] 2
+APAC Cross Regional Private Endpoint
+.RE
+.IP \[bu] 2
+"s3.tok\-ap\-geo.objectstorage.service.networklayer.com"
+.RS 2
+.IP \[bu] 2
+APAC Cross Regional Tokyo Private Endpoint
+.RE
+.IP \[bu] 2
+"s3.hkg\-ap\-geo.objectstorage.service.networklayer.com"
+.RS 2
+.IP \[bu] 2
+APAC Cross Regional HongKong Private Endpoint
+.RE
+.IP \[bu] 2
+"s3.seo\-ap\-geo.objectstorage.service.networklayer.com"
+.RS 2
+.IP \[bu] 2
+APAC Cross Regional Seoul Private Endpoint
+.RE
+.IP \[bu] 2
+"s3.mel01.objectstorage.softlayer.net"
+.RS 2
+.IP \[bu] 2
+Melbourne Single Site Endpoint
+.RE
+.IP \[bu] 2
+"s3.mel01.objectstorage.service.networklayer.com"
+.RS 2
+.IP \[bu] 2
+Melbourne Single Site Private Endpoint
+.RE
+.IP \[bu] 2
+"s3.tor01.objectstorage.softlayer.net"
+.RS 2
+.IP \[bu] 2
+Toronto Single Site Endpoint
+.RE
+.IP \[bu] 2
+"s3.tor01.objectstorage.service.networklayer.com"
+.RS 2
+.IP \[bu] 2
+Toronto Single Site Private Endpoint
+.RE
+.RE
+.SS \-\-s3\-endpoint
+.PP
+Endpoint for S3 API.
+Required when using an S3 clone.
+.IP \[bu] 2
+Config: endpoint
+.IP \[bu] 2
+Env Var: RCLONE_S3_ENDPOINT
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+"objects\-us\-west\-1.dream.io"
+.RS 2
+.IP \[bu] 2
+Dream Objects endpoint
+.RE
+.IP \[bu] 2
+"nyc3.digitaloceanspaces.com"
+.RS 2
+.IP \[bu] 2
+Digital Ocean Spaces New York 3
+.RE
+.IP \[bu] 2
+"ams3.digitaloceanspaces.com"
+.RS 2
+.IP \[bu] 2
+Digital Ocean Spaces Amsterdam 3
+.RE
+.IP \[bu] 2
+"sgp1.digitaloceanspaces.com"
+.RS 2
+.IP \[bu] 2
+Digital Ocean Spaces Singapore 1
+.RE
+.IP \[bu] 2
+"s3.wasabisys.com"
+.RS 2
+.IP \[bu] 2
+Wasabi Object Storage
+.RE
+.RE
+.SS \-\-s3\-location\-constraint
+.PP
+Location constraint \- must be set to match the Region.
+Used when creating buckets only.
+.IP \[bu] 2
+Config: location_constraint
+.IP \[bu] 2
+Env Var: RCLONE_S3_LOCATION_CONSTRAINT
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+""
+.RS 2
+.IP \[bu] 2
+Empty for US Region, Northern Virginia or Pacific Northwest.
+.RE
+.IP \[bu] 2
+"us\-east\-2"
+.RS 2
+.IP \[bu] 2
+US East (Ohio) Region.
+.RE
+.IP \[bu] 2
+"us\-west\-2"
+.RS 2
+.IP \[bu] 2
+US West (Oregon) Region.
+.RE
+.IP \[bu] 2
+"us\-west\-1"
+.RS 2
+.IP \[bu] 2
+US West (Northern California) Region.
+.RE
+.IP \[bu] 2
+"ca\-central\-1"
+.RS 2
+.IP \[bu] 2
+Canada (Central) Region.
+.RE
+.IP \[bu] 2
+"eu\-west\-1"
+.RS 2
+.IP \[bu] 2
+EU (Ireland) Region.
+.RE
+.IP \[bu] 2
+"eu\-west\-2"
+.RS 2
+.IP \[bu] 2
+EU (London) Region.
+.RE
+.IP \[bu] 2
+"EU"
+.RS 2
+.IP \[bu] 2
+EU Region.
+.RE
+.IP \[bu] 2
+"ap\-southeast\-1"
+.RS 2
+.IP \[bu] 2
+Asia Pacific (Singapore) Region.
+.RE
+.IP \[bu] 2
+"ap\-southeast\-2"
+.RS 2
+.IP \[bu] 2
+Asia Pacific (Sydney) Region.
+.RE
+.IP \[bu] 2
+"ap\-northeast\-1"
+.RS 2
+.IP \[bu] 2
+Asia Pacific (Tokyo) Region.
+.RE
+.IP \[bu] 2
+"ap\-northeast\-2"
+.RS 2
+.IP \[bu] 2
+Asia Pacific (Seoul)
+.RE
+.IP \[bu] 2
+"ap\-south\-1"
+.RS 2
+.IP \[bu] 2
+Asia Pacific (Mumbai)
+.RE
+.IP \[bu] 2
+"sa\-east\-1"
+.RS 2
+.IP \[bu] 2
+South America (Sao Paulo) Region.
+.RE
+.RE
+.SS \-\-s3\-location\-constraint
+.PP
+Location constraint \- must match endpoint when using IBM Cloud Public.
+For on\-prem COS, do not make a selection from this list; hit enter.
+.IP \[bu] 2
+Config: location_constraint
+.IP \[bu] 2
+Env Var: RCLONE_S3_LOCATION_CONSTRAINT
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+"us\-standard"
+.RS 2
+.IP \[bu] 2
+US Cross Region Standard
+.RE
+.IP \[bu] 2
+"us\-vault"
+.RS 2
+.IP \[bu] 2
+US Cross Region Vault
+.RE
+.IP \[bu] 2
+"us\-cold"
+.RS 2
+.IP \[bu] 2
+US Cross Region Cold
+.RE
+.IP \[bu] 2
+"us\-flex"
+.RS 2
+.IP \[bu] 2
+US Cross Region Flex
+.RE
+.IP \[bu] 2
+"us\-east\-standard"
+.RS 2
+.IP \[bu] 2
+US East Region Standard
+.RE
+.IP \[bu] 2
+"us\-east\-vault"
+.RS 2
+.IP \[bu] 2
+US East Region Vault
+.RE
+.IP \[bu] 2
+"us\-east\-cold"
+.RS 2
+.IP \[bu] 2
+US East Region Cold
+.RE
+.IP \[bu] 2
+"us\-east\-flex"
+.RS 2
+.IP \[bu] 2
+US East Region Flex
+.RE
+.IP \[bu] 2
+"us\-south\-standard"
+.RS 2
+.IP \[bu] 2
+US South Region Standard
+.RE
+.IP \[bu] 2
+"us\-south\-vault"
+.RS 2
+.IP \[bu] 2
+US South Region Vault
+.RE
+.IP \[bu] 2
+"us\-south\-cold"
+.RS 2
+.IP \[bu] 2
+US South Region Cold
+.RE
+.IP \[bu] 2
+"us\-south\-flex"
+.RS 2
+.IP \[bu] 2
+US South Region Flex
+.RE
+.IP \[bu] 2
+"eu\-standard"
+.RS 2
+.IP \[bu] 2
+EU Cross Region Standard
+.RE
+.IP \[bu] 2
+"eu\-vault"
+.RS 2
+.IP \[bu] 2
+EU Cross Region Vault
+.RE
+.IP \[bu] 2
+"eu\-cold"
+.RS 2
+.IP \[bu] 2
+EU Cross Region Cold
+.RE
+.IP \[bu] 2
+"eu\-flex"
+.RS 2
+.IP \[bu] 2
+EU Cross Region Flex
+.RE
+.IP \[bu] 2
+"eu\-gb\-standard"
+.RS 2
+.IP \[bu] 2
+Great Britain Standard
+.RE
+.IP \[bu] 2
+"eu\-gb\-vault"
+.RS 2
+.IP \[bu] 2
+Great Britain Vault
+.RE
+.IP \[bu] 2
+"eu\-gb\-cold"
+.RS 2
+.IP \[bu] 2
+Great Britain Cold
+.RE
+.IP \[bu] 2
+"eu\-gb\-flex"
+.RS 2
+.IP \[bu] 2
+Great Britain Flex
+.RE
+.IP \[bu] 2
+"ap\-standard"
+.RS 2
+.IP \[bu] 2
+APAC Standard
+.RE
+.IP \[bu] 2
+"ap\-vault"
+.RS 2
+.IP \[bu] 2
+APAC Vault
+.RE
+.IP \[bu] 2
+"ap\-cold"
+.RS 2
+.IP \[bu] 2
+APAC Cold
+.RE
+.IP \[bu] 2
+"ap\-flex"
+.RS 2
+.IP \[bu] 2
+APAC Flex
+.RE
+.IP \[bu] 2
+"mel01\-standard"
+.RS 2
+.IP \[bu] 2
+Melbourne Standard
+.RE
+.IP \[bu] 2
+"mel01\-vault"
+.RS 2
+.IP \[bu] 2
+Melbourne Vault
+.RE
+.IP \[bu] 2
+"mel01\-cold"
+.RS 2
+.IP \[bu] 2
+Melbourne Cold
+.RE
+.IP \[bu] 2
+"mel01\-flex"
+.RS 2
+.IP \[bu] 2
+Melbourne Flex
+.RE
+.IP \[bu] 2
+"tor01\-standard"
+.RS 2
+.IP \[bu] 2
+Toronto Standard
+.RE
+.IP \[bu] 2
+"tor01\-vault"
+.RS 2
+.IP \[bu] 2
+Toronto Vault
+.RE
+.IP \[bu] 2
+"tor01\-cold"
+.RS 2
+.IP \[bu] 2
+Toronto Cold
+.RE
+.IP \[bu] 2
+"tor01\-flex"
+.RS 2
+.IP \[bu] 2
+Toronto Flex
+.RE
+.RE
+.SS \-\-s3\-location\-constraint
+.PP
+Location constraint \- must be set to match the Region.
+Leave blank if not sure.
+Used when creating buckets only.
+.IP \[bu] 2
+Config: location_constraint
+.IP \[bu] 2
+Env Var: RCLONE_S3_LOCATION_CONSTRAINT
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-s3\-acl
.PP
Canned ACL used when creating buckets and/or storing objects in S3.
+For more info visit
+https://docs.aws.amazon.com/AmazonS3/latest/dev/acl\-overview.html#canned\-acl
+.IP \[bu] 2
+Config: acl
+.IP \[bu] 2
+Env Var: RCLONE_S3_ACL
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+"private"
+.RS 2
+.IP \[bu] 2
+Owner gets FULL_CONTROL.
+No one else has access rights (default).
+.RE
+.IP \[bu] 2
+"public\-read"
+.RS 2
+.IP \[bu] 2
+Owner gets FULL_CONTROL.
+The AllUsers group gets READ access.
+.RE
+.IP \[bu] 2
+"public\-read\-write"
+.RS 2
+.IP \[bu] 2
+Owner gets FULL_CONTROL.
+The AllUsers group gets READ and WRITE access.
+.IP \[bu] 2
+Granting this on a bucket is generally not recommended.
+.RE
+.IP \[bu] 2
+"authenticated\-read"
+.RS 2
+.IP \[bu] 2
+Owner gets FULL_CONTROL.
+The AuthenticatedUsers group gets READ access.
+.RE
+.IP \[bu] 2
+"bucket\-owner\-read"
+.RS 2
+.IP \[bu] 2
+Object owner gets FULL_CONTROL.
+Bucket owner gets READ access.
+.IP \[bu] 2
+If you specify this canned ACL when creating a bucket, Amazon S3 ignores
+it.
+.RE
+.IP \[bu] 2
+"bucket\-owner\-full\-control"
+.RS 2
+.IP \[bu] 2
+Both the object owner and the bucket owner get FULL_CONTROL over the
+object.
+.IP \[bu] 2
+If you specify this canned ACL when creating a bucket, Amazon S3 ignores
+it.
+.RE
+.IP \[bu] 2
+"private"
+.RS 2
+.IP \[bu] 2
+Owner gets FULL_CONTROL.
+No one else has access rights (default).
+This acl is available on IBM Cloud (Infra), IBM Cloud (Storage),
+On\-Premise COS
+.RE
+.IP \[bu] 2
+"public\-read"
+.RS 2
+.IP \[bu] 2
+Owner gets FULL_CONTROL.
+The AllUsers group gets READ access.
+This acl is available on IBM Cloud (Infra), IBM Cloud (Storage),
+On\-Premise IBM COS
+.RE
+.IP \[bu] 2
+"public\-read\-write"
+.RS 2
+.IP \[bu] 2
+Owner gets FULL_CONTROL.
+The AllUsers group gets READ and WRITE access.
+This acl is available on IBM Cloud (Infra), On\-Premise IBM COS
+.RE
+.IP \[bu] 2
+"authenticated\-read"
+.RS 2
+.IP \[bu] 2
+Owner gets FULL_CONTROL.
+The AuthenticatedUsers group gets READ access.
+Not supported on Buckets.
+This acl is available on IBM Cloud (Infra) and On\-Premise IBM COS
+.RE
+.RE
+.SS \-\-s3\-server\-side\-encryption
.PP
-For more info visit the canned ACL
-docs (https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl).
-.SS \-\-s3\-storage\-class=STRING
+The server\-side encryption algorithm used when storing this object in
+S3.
+.IP \[bu] 2
+Config: server_side_encryption
+.IP \[bu] 2
+Env Var: RCLONE_S3_SERVER_SIDE_ENCRYPTION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+""
+.RS 2
+.IP \[bu] 2
+None
+.RE
+.IP \[bu] 2
+"AES256"
+.RS 2
+.IP \[bu] 2
+AES256
+.RE
+.IP \[bu] 2
+"aws:kms"
+.RS 2
+.IP \[bu] 2
+aws:kms
+.RE
+.RE
+.SS \-\-s3\-sse\-kms\-key\-id
.PP
-Storage class to upload new objects with.
+If using KMS ID you must provide the ARN of the key.
+.IP \[bu] 2
+Config: sse_kms_key_id
+.IP \[bu] 2
+Env Var: RCLONE_S3_SSE_KMS_KEY_ID
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+""
+.RS 2
+.IP \[bu] 2
+None
+.RE
+.IP \[bu] 2
+"arn:aws:kms:us\-east\-1:*"
+.RS 2
+.IP \[bu] 2
+arn:aws:kms:*
+.RE
+.RE
+.SS \-\-s3\-storage\-class
.PP
-Available options include:
+The storage class to use when storing new objects in S3.
.IP \[bu] 2
-STANDARD \- default storage class
+Config: storage_class
.IP \[bu] 2
-STANDARD_IA \- for less frequently accessed data (e.g backups)
+Env Var: RCLONE_S3_STORAGE_CLASS
.IP \[bu] 2
-ONEZONE_IA \- for storing data in only one Availability Zone
+Type: string
.IP \[bu] 2
-REDUCED_REDUNDANCY (only for noncritical, reproducible data, has lower
-redundancy)
-.SS \-\-s3\-chunk\-size=SIZE
+Default: ""
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+""
+.RS 2
+.IP \[bu] 2
+Default
+.RE
+.IP \[bu] 2
+"STANDARD"
+.RS 2
+.IP \[bu] 2
+Standard storage class
+.RE
+.IP \[bu] 2
+"REDUCED_REDUNDANCY"
+.RS 2
+.IP \[bu] 2
+Reduced redundancy storage class
+.RE
+.IP \[bu] 2
+"STANDARD_IA"
+.RS 2
+.IP \[bu] 2
+Standard Infrequent Access storage class
+.RE
+.IP \[bu] 2
+"ONEZONE_IA"
+.RS 2
+.IP \[bu] 2
+One Zone Infrequent Access storage class
+.RE
+.RE
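Since each option above lists an environment variable, an S3 remote can also be configured entirely from the environment. A hypothetical sketch for AWS (the values are illustrative):

```shell
# Configure the s3 backend via the documented RCLONE_S3_* variables
# instead of a config file entry.
export RCLONE_S3_PROVIDER=AWS
export RCLONE_S3_ENV_AUTH=true
export RCLONE_S3_REGION=us-east-1
export RCLONE_S3_ACL=private
export RCLONE_S3_STORAGE_CLASS=STANDARD_IA
```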
+.SS Advanced Options
+.PP
+Here are the advanced options specific to s3 (Amazon S3 Compliant
+Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)).
+.SS \-\-s3\-chunk\-size
+.PP
+Chunk size to use for uploading.
.PP
Any files larger than this will be uploaded in chunks of this size.
The default is 5MB.
The minimum is 5MB.
.PP
-Note that 2 chunks of this size are buffered in memory per transfer.
+Note that "\-\-s3\-upload\-concurrency" chunks of this size are buffered
+in memory per transfer.
.PP
If you are transferring large files over high speed links and you have
enough memory, then increasing this will speed up the transfers.
-.SS \-\-s3\-force\-path\-style=BOOL
+.IP \[bu] 2
+Config: chunk_size
+.IP \[bu] 2
+Env Var: RCLONE_S3_CHUNK_SIZE
+.IP \[bu] 2
+Type: SizeSuffix
+.IP \[bu] 2
+Default: 5M
+.SS \-\-s3\-disable\-checksum
+.PP
+Don\[aq]t store MD5 checksum with object metadata
+.IP \[bu] 2
+Config: disable_checksum
+.IP \[bu] 2
+Env Var: RCLONE_S3_DISABLE_CHECKSUM
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS \-\-s3\-session\-token
+.PP
+An AWS session token
+.IP \[bu] 2
+Config: session_token
+.IP \[bu] 2
+Env Var: RCLONE_S3_SESSION_TOKEN
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-s3\-upload\-concurrency
+.PP
+Concurrency for multipart uploads.
+.PP
+This is the number of chunks of the same file that are uploaded
+concurrently.
+.PP
+If you are uploading small numbers of large files over a high speed
+link and these uploads do not fully utilize your bandwidth, then
+increasing this may help to speed up the transfers.
+.IP \[bu] 2
+Config: upload_concurrency
+.IP \[bu] 2
+Env Var: RCLONE_S3_UPLOAD_CONCURRENCY
+.IP \[bu] 2
+Type: int
+.IP \[bu] 2
+Default: 2
+.SS \-\-s3\-force\-path\-style
+.PP
+If true use path style access, if false use virtual hosted style.
.PP
If this is true (the default) then rclone will use path style access, if
false then rclone will use virtual path style.
@@ -7885,17 +9439,31 @@ See the AWS S3
docs (https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro)
for more info.
.PP
-Some providers (eg Aliyun OSS or Netease COS) require this set to
-\f[C]false\f[].
-It can also be set in the config in the advanced section.
-.SS \-\-s3\-upload\-concurrency
+Some providers (eg Aliyun OSS or Netease COS) require this set to false.
+.IP \[bu] 2
+Config: force_path_style
+.IP \[bu] 2
+Env Var: RCLONE_S3_FORCE_PATH_STYLE
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: true
+.SS \-\-s3\-v2\-auth
.PP
-Number of chunks of the same file that are uploaded concurrently.
-Default is 2.
+If true use v2 authentication.
.PP
-If you are uploading small amount of large file over high speed link and
-these uploads do not fully utilize your bandwidth, then increasing this
-may help to speed up the transfers.
+If this is false (the default) then rclone will use v4 authentication.
+If it is set then rclone will use v2 authentication.
+.PP
+Use this only if v4 signatures don\[aq]t work, eg pre Jewel/v10 CEPH.
+.IP \[bu] 2
+Config: v2_auth
+.IP \[bu] 2
+Env Var: RCLONE_S3_V2_AUTH
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS Anonymous access to public buckets
.PP
If you want to use rclone to access a public bucket, configure with a
@@ -8807,6 +10375,9 @@ the old versions of files, leaving the current ones intact.
You can also supply a path and only old versions under that path will be
deleted, eg \f[C]rclone\ cleanup\ remote:bucket/path/to/stuff\f[].
.PP
+Note that \f[C]cleanup\f[] does not remove partially uploaded files from
+the bucket.
+.PP
When you \f[C]purge\f[] a bucket, the current and the old versions will
be deleted then the bucket will be deleted.
.PP
@@ -8901,42 +10472,10 @@ start and finish the upload) and another 2 requests for each chunk:
/b2api/v1/b2_finish_large_file
\f[]
.fi
-.SS Specific options
+.SS Versions
.PP
-Here are the command line options specific to this cloud storage system.
-.SS \-\-b2\-chunk\-size valuee=SIZE
-.PP
-When uploading large files chunk the file into this size.
-Note that these chunks are buffered in memory and there might a maximum
-of \f[C]\-\-transfers\f[] chunks in progress at once.
-5,000,000 Bytes is the minimim size (default 96M).
-.SS \-\-b2\-upload\-cutoff=SIZE
-.PP
-Cutoff for switching to chunked upload (default 190.735 MiB == 200 MB).
-Files above this size will be uploaded in chunks of
-\f[C]\-\-b2\-chunk\-size\f[].
-.PP
-This value should be set no larger than 4.657GiB (== 5GB) as this is the
-largest file size that can be uploaded.
-.SS \-\-b2\-test\-mode=FLAG
-.PP
-This is for debugging purposes only.
-.PP
-Setting FLAG to one of the strings below will cause b2 to return
-specific errors for debugging purposes.
-.IP \[bu] 2
-\f[C]fail_some_uploads\f[]
-.IP \[bu] 2
-\f[C]expire_some_account_authorization_tokens\f[]
-.IP \[bu] 2
-\f[C]force_cap_exceeded\f[]
-.PP
-These will be set in the \f[C]X\-Bz\-Test\-Mode\f[] header which is
-documented in the b2 integrations
-checklist (https://www.backblaze.com/b2/docs/integration_checklist.html).
-.SS \-\-b2\-versions
-.PP
-When set rclone will show and act on older versions of files.
+Versions can be viewed with the \f[C]\-\-b2\-versions\f[] flag.
+When it is set rclone will show and act on older versions of files.
For example
.PP
Listing without \f[C]\-\-b2\-versions\f[]
@@ -8967,6 +10506,128 @@ nearest millisecond appended to them.
.PP
Note that when using \f[C]\-\-b2\-versions\f[] no file write operations
are permitted, so you can\[aq]t upload files or delete them.
+.SS Standard Options
+.PP
+Here are the standard options specific to b2 (Backblaze B2).
+.SS \-\-b2\-account
+.PP
+Account ID or Application Key ID
+.IP \[bu] 2
+Config: account
+.IP \[bu] 2
+Env Var: RCLONE_B2_ACCOUNT
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-b2\-key
+.PP
+Application Key
+.IP \[bu] 2
+Config: key
+.IP \[bu] 2
+Env Var: RCLONE_B2_KEY
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-b2\-hard\-delete
+.PP
+Permanently delete files on remote removal, otherwise hide files.
+.IP \[bu] 2
+Config: hard_delete
+.IP \[bu] 2
+Env Var: RCLONE_B2_HARD_DELETE
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS Advanced Options
+.PP
+Here are the advanced options specific to b2 (Backblaze B2).
+.SS \-\-b2\-endpoint
+.PP
+Endpoint for the service.
+Leave blank normally.
+.IP \[bu] 2
+Config: endpoint
+.IP \[bu] 2
+Env Var: RCLONE_B2_ENDPOINT
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-b2\-test\-mode
+.PP
+A flag string for X\-Bz\-Test\-Mode header for debugging.
+.PP
+This is for debugging purposes only.
+Setting it to one of the strings below will cause b2 to return specific
+errors:
+.IP \[bu] 2
+"fail_some_uploads"
+.IP \[bu] 2
+"expire_some_account_authorization_tokens"
+.IP \[bu] 2
+"force_cap_exceeded"
+.PP
+These will be set in the "X\-Bz\-Test\-Mode" header which is documented
+in the b2 integrations
+checklist (https://www.backblaze.com/b2/docs/integration_checklist.html).
+.IP \[bu] 2
+Config: test_mode
+.IP \[bu] 2
+Env Var: RCLONE_B2_TEST_MODE
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-b2\-versions
+.PP
+Include old versions in directory listings.
+Note that when using this no file write operations are permitted, so you
+can\[aq]t upload files or delete them.
+.IP \[bu] 2
+Config: versions
+.IP \[bu] 2
+Env Var: RCLONE_B2_VERSIONS
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS \-\-b2\-upload\-cutoff
+.PP
+Cutoff for switching to chunked upload.
+.PP
+Files above this size will be uploaded in chunks of
+"\-\-b2\-chunk\-size".
+.PP
+This value should be set no larger than 4.657GiB (== 5GB).
+.IP \[bu] 2
+Config: upload_cutoff
+.IP \[bu] 2
+Env Var: RCLONE_B2_UPLOAD_CUTOFF
+.IP \[bu] 2
+Type: SizeSuffix
+.IP \[bu] 2
+Default: 200M
+.SS \-\-b2\-chunk\-size
+.PP
+Upload chunk size.
+Must fit in memory.
+.PP
+When uploading large files, chunk the file into this size.
+Note that these chunks are buffered in memory and there might be a
+maximum of "\-\-transfers" chunks in progress at once.
+5,000,000 bytes is the minimum size.
+.IP \[bu] 2
+Config: chunk_size
+.IP \[bu] 2
+Env Var: RCLONE_B2_CHUNK_SIZE
+.IP \[bu] 2
+Type: SizeSuffix
+.IP \[bu] 2
+Default: 96M
.SS Box
.PP
Paths are specified as \f[C]remote:path\f[]
@@ -9205,17 +10866,57 @@ Chunks are buffered in memory and are normally 8MB so increasing
.PP
Depending on the enterprise settings for your user, the item will either
be actually deleted from Box or moved to the trash.
-.SS Specific options
+.SS Standard Options
.PP
-Here are the command line options specific to this cloud storage system.
-.SS \-\-box\-upload\-cutoff=SIZE
+Here are the standard options specific to box (Box).
+.SS \-\-box\-client\-id
.PP
-Cutoff for switching to chunked upload \- must be >= 50MB.
-The default is 50MB.
-.SS \-\-box\-commit\-retries int
+Box App Client Id.
+Leave blank normally.
+.IP \[bu] 2
+Config: client_id
+.IP \[bu] 2
+Env Var: RCLONE_BOX_CLIENT_ID
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-box\-client\-secret
+.PP
+Box App Client Secret.
+Leave blank normally.
+.IP \[bu] 2
+Config: client_secret
+.IP \[bu] 2
+Env Var: RCLONE_BOX_CLIENT_SECRET
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS Advanced Options
+.PP
+Here are the advanced options specific to box (Box).
+.SS \-\-box\-upload\-cutoff
+.PP
+Cutoff for switching to multipart upload (>= 50MB).
+.IP \[bu] 2
+Config: upload_cutoff
+.IP \[bu] 2
+Env Var: RCLONE_BOX_UPLOAD_CUTOFF
+.IP \[bu] 2
+Type: SizeSuffix
+.IP \[bu] 2
+Default: 50M
+.SS \-\-box\-commit\-retries
.PP
Max number of times to try committing a multipart file.
-(default 100)
+.IP \[bu] 2
+Config: commit_retries
+.IP \[bu] 2
+Env Var: RCLONE_BOX_COMMIT_RETRIES
+.IP \[bu] 2
+Type: int
+.IP \[bu] 2
+Default: 100
.SS Limitations
.PP
Note that Box is case insensitive so you can\[aq]t have a file called
@@ -9382,8 +11083,8 @@ blip can happen though)
Files are uploaded in sequence and only one file is uploaded at a time.
Uploads will be stored in a queue and be processed based on the order
they were added.
-The queue and the temporary storage is persistent across restarts and
-even purges of the cache.
+The queue and the temporary storage is persistent across restarts but
+can be cleared on startup with the \f[C]\-\-cache\-db\-purge\f[] flag.
.SS Write Support
.PP
Writes are supported through \f[C]cache\f[].
@@ -9432,6 +11133,30 @@ enabled.
.PP
Affected settings: \- \f[C]cache\-workers\f[]: \f[I]Configured value\f[]
during confirmed playback or \f[I]1\f[] all the other times
+.SS Certificate Validation
+.PP
+When the Plex server is configured to only accept secure connections, it
+is possible to use \f[C]\&.plex.direct\f[] URLs to ensure
+certificate validation succeeds.
+These URLs are used by Plex internally to connect to the Plex
+server securely.
+.PP
+The format of these URLs is the following:
+.PP
+https://ip\-with\-dots\-replaced.server\-hash.plex.direct:32400/
+.PP
+The \f[C]ip\-with\-dots\-replaced\f[] part can be any IPv4 address,
+where the dots have been replaced with dashes, e.g.
+\f[C]127.0.0.1\f[] becomes \f[C]127\-0\-0\-1\f[].
+.PP
+To get the \f[C]server\-hash\f[] part, the easiest way is to visit
+.PP
+https://plex.tv/api/resources?includeHttps=1&X\-Plex\-Token=your\-plex\-token
+.PP
+This page will list all the available Plex servers for your account with
+at least one \f[C]\&.plex.direct\f[] link for each.
+Copy one URL and replace the IP address with the desired address.
+This can be used as the \f[C]plex_url\f[] value.
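As a sketch, the dot\-to\-dash substitution can be done in shell; SERVER\-HASH below is a placeholder for the hash obtained from the resources page:

```shell
# Sketch: turn a server IP into a .plex.direct URL.
# SERVER-HASH is a placeholder for your real server hash.
ip="127.0.0.1"
dashed=$(echo "$ip" | tr '.' '-')
plex_url="https://${dashed}.SERVER-HASH.plex.direct:32400/"
echo "$plex_url"   # https://127-0-0-1.SERVER-HASH.plex.direct:32400/
```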
.SS Known issues
.SS Mount and \-\-dir\-cache\-time
.PP
@@ -9503,6 +11228,24 @@ provider which makes it think we\[aq]re downloading the full file
instead of small chunks.
Organizing the remotes in this order yields better results: \f[B]cloud
remote\f[] \-> \f[B]cache\f[] \-> \f[B]crypt\f[]
+.SS absolute remote paths
+.PP
+\f[C]cache\f[] cannot differentiate between relative and absolute paths
+for the wrapped remote.
+Any path given in the \f[C]remote\f[] config setting and on the command
+line will be passed to the wrapped remote as is, but for storing the
+chunks on disk the path will be made relative by removing any leading
+\f[C]/\f[] character.
+.PP
+This behavior is irrelevant for most backend types, but there are
+backends where a leading \f[C]/\f[] changes the effective directory,
+e.g.
+in the \f[C]sftp\f[] backend paths starting with a \f[C]/\f[] are
+relative to the root of the SSH server and paths without are relative to
+the user home directory.
+As a result \f[C]sftp:bin\f[] and \f[C]sftp:/bin\f[] will share the same
+cache folder, even if they represent a different directory on the SSH
+server.
.SS Cache and Remote Control (\-\-rc)
.PP
Cache supports the new \f[C]\-\-rc\f[] mode in rclone and can be remote
@@ -9518,84 +11261,270 @@ wrapped by crypt.
Params: \- \f[B]remote\f[] = path to remote \f[B](required)\f[] \-
\f[B]withData\f[] = true/false to delete cached data (chunks) as well
\f[I](optional, false by default)\f[]
-.SS Specific options
+.SS Standard Options
.PP
-Here are the command line options specific to this cloud storage system.
-.SS \-\-cache\-db\-path=PATH
+Here are the standard options specific to cache (Cache a remote).
+.SS \-\-cache\-remote
.PP
-Path to where the file structure metadata (DB) is stored locally.
-The remote name is used as the DB file name.
+Remote to cache.
+Normally should contain a \[aq]:\[aq] and a path, eg
+"myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not
+recommended).
+.IP \[bu] 2
+Config: remote
+.IP \[bu] 2
+Env Var: RCLONE_CACHE_REMOTE
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-cache\-plex\-url
.PP
-\f[B]Default\f[]: /cache\-backend/ \f[B]Example\f[]:
-/.cache/cache\-backend/test\-cache
-.SS \-\-cache\-chunk\-path=PATH
+The URL of the Plex server
+.IP \[bu] 2
+Config: plex_url
+.IP \[bu] 2
+Env Var: RCLONE_CACHE_PLEX_URL
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-cache\-plex\-username
.PP
-Path to where partial file data (chunks) is stored locally.
-The remote name is appended to the final path.
+The username of the Plex user
+.IP \[bu] 2
+Config: plex_username
+.IP \[bu] 2
+Env Var: RCLONE_CACHE_PLEX_USERNAME
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-cache\-plex\-password
.PP
-This config follows the \f[C]\-\-cache\-db\-path\f[].
-If you specify a custom location for \f[C]\-\-cache\-db\-path\f[] and
-don\[aq]t specify one for \f[C]\-\-cache\-chunk\-path\f[] then
-\f[C]\-\-cache\-chunk\-path\f[] will use the same path as
-\f[C]\-\-cache\-db\-path\f[].
-.PP
-\f[B]Default\f[]: /cache\-backend/ \f[B]Example\f[]:
-/.cache/cache\-backend/test\-cache
-.SS \-\-cache\-db\-purge
-.PP
-Flag to clear all the cached data for this remote before.
-.PP
-\f[B]Default\f[]: not set
-.SS \-\-cache\-chunk\-size=SIZE
+The password of the Plex user
+.IP \[bu] 2
+Config: plex_password
+.IP \[bu] 2
+Env Var: RCLONE_CACHE_PLEX_PASSWORD
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-cache\-chunk\-size
.PP
The size of a chunk (partial file data).
+.PP
Use lower numbers for slower connections.
If the chunk size is changed, any downloaded chunks will be invalid and
cache\-chunk\-path will need to be cleared or unexpected EOF errors will
occur.
+.IP \[bu] 2
+Config: chunk_size
+.IP \[bu] 2
+Env Var: RCLONE_CACHE_CHUNK_SIZE
+.IP \[bu] 2
+Type: SizeSuffix
+.IP \[bu] 2
+Default: 5M
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+"1m"
+.RS 2
+.IP \[bu] 2
+1MB
+.RE
+.IP \[bu] 2
+"5M"
+.RS 2
+.IP \[bu] 2
+5 MB
+.RE
+.IP \[bu] 2
+"10M"
+.RS 2
+.IP \[bu] 2
+10 MB
+.RE
+.RE
+.SS \-\-cache\-info\-age
.PP
-\f[B]Default\f[]: 5M
-.SS \-\-cache\-total\-chunk\-size=SIZE
+How long to cache file structure information (directory listings, file
+size, times etc).
+If all write operations are done through the cache then you can safely
+make this value very large as the cache store will also be updated in
+real time.
+.IP \[bu] 2
+Config: info_age
+.IP \[bu] 2
+Env Var: RCLONE_CACHE_INFO_AGE
+.IP \[bu] 2
+Type: Duration
+.IP \[bu] 2
+Default: 6h0m0s
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+"1h"
+.RS 2
+.IP \[bu] 2
+1 hour
+.RE
+.IP \[bu] 2
+"24h"
+.RS 2
+.IP \[bu] 2
+24 hours
+.RE
+.IP \[bu] 2
+"48h"
+.RS 2
+.IP \[bu] 2
+48 hours
+.RE
+.RE
+.SS \-\-cache\-chunk\-total\-size
.PP
The total size that the chunks can take up on the local disk.
-If \f[C]cache\f[] exceeds this value then it will start to the delete
-the oldest chunks until it goes under this value.
.PP
-\f[B]Default\f[]: 10G
-.SS \-\-cache\-chunk\-clean\-interval=DURATION
+If the cache exceeds this value then it will start to delete the oldest
+chunks until it goes under this value.
+.IP \[bu] 2
+Config: chunk_total_size
+.IP \[bu] 2
+Env Var: RCLONE_CACHE_CHUNK_TOTAL_SIZE
+.IP \[bu] 2
+Type: SizeSuffix
+.IP \[bu] 2
+Default: 10G
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+"500M"
+.RS 2
+.IP \[bu] 2
+500 MB
+.RE
+.IP \[bu] 2
+"1G"
+.RS 2
+.IP \[bu] 2
+1 GB
+.RE
+.IP \[bu] 2
+"10G"
+.RS 2
+.IP \[bu] 2
+10 GB
+.RE
+.RE
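Put together in rclone.conf, a cache remote using these standard options might look like the following sketch (the remote and section names are placeholders):

```ini
# Hypothetical cache remote wrapping mydrive:media; names are placeholders.
[cached]
type = cache
remote = mydrive:media
chunk_size = 10M
info_age = 48h
chunk_total_size = 10G
```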
+.SS Advanced Options
.PP
-How often should \f[C]cache\f[] perform cleanups of the chunk storage.
+Here are the advanced options specific to cache (Cache a remote).
+.SS \-\-cache\-plex\-token
+.PP
+The plex token for authentication \- auto set normally
+.IP \[bu] 2
+Config: plex_token
+.IP \[bu] 2
+Env Var: RCLONE_CACHE_PLEX_TOKEN
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-cache\-plex\-insecure
+.PP
+Skip all certificate verifications when connecting to the Plex server
+.IP \[bu] 2
+Config: plex_insecure
+.IP \[bu] 2
+Env Var: RCLONE_CACHE_PLEX_INSECURE
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-cache\-db\-path
+.PP
+Directory to store file structure metadata DB.
+The remote name is used as the DB file name.
+.IP \[bu] 2
+Config: db_path
+.IP \[bu] 2
+Env Var: RCLONE_CACHE_DB_PATH
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: "/home/ncw/.cache/rclone/cache\-backend"
+.SS \-\-cache\-chunk\-path
+.PP
+Directory to cache chunk files.
+.PP
+Path to where partial file data (chunks) are stored locally.
+The remote name is appended to the final path.
+.PP
+This config follows the "\-\-cache\-db\-path".
+If you specify a custom location for "\-\-cache\-db\-path" and don\[aq]t
+specify one for "\-\-cache\-chunk\-path" then "\-\-cache\-chunk\-path"
+will use the same path as "\-\-cache\-db\-path".
+.IP \[bu] 2
+Config: chunk_path
+.IP \[bu] 2
+Env Var: RCLONE_CACHE_CHUNK_PATH
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: "/home/ncw/.cache/rclone/cache\-backend"
+.SS \-\-cache\-db\-purge
+.PP
+Clear all the cached data for this remote on start.
+.IP \[bu] 2
+Config: db_purge
+.IP \[bu] 2
+Env Var: RCLONE_CACHE_DB_PURGE
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS \-\-cache\-chunk\-clean\-interval
+.PP
+How often the cache should perform cleanups of the chunk storage.
The default value should be ok for most people.
-If you find that \f[C]cache\f[] goes over
-\f[C]cache\-total\-chunk\-size\f[] too often then try to lower this
-value to force it to perform cleanups more often.
-.PP
-\f[B]Default\f[]: 1m
-.SS \-\-cache\-info\-age=DURATION
-.PP
-How long to keep file structure information (directory listings, file
-size, mod times etc) locally.
-.PP
-If all write operations are done through \f[C]cache\f[] then you can
-safely make this value very large as the cache store will also be
-updated in real time.
-.PP
-\f[B]Default\f[]: 6h
-.SS \-\-cache\-read\-retries=RETRIES
+If you find that the cache goes over "cache\-chunk\-total\-size" too
+often then try to lower this value to force it to perform cleanups more
+often.
+.IP \[bu] 2
+Config: chunk_clean_interval
+.IP \[bu] 2
+Env Var: RCLONE_CACHE_CHUNK_CLEAN_INTERVAL
+.IP \[bu] 2
+Type: Duration
+.IP \[bu] 2
+Default: 1m0s
+.SS \-\-cache\-read\-retries
.PP
How many times to retry a read from a cache storage.
.PP
-Since reading from a \f[C]cache\f[] stream is independent from
-downloading file data, readers can get to a point where there\[aq]s no
-more data in the cache.
-Most of the times this can indicate a connectivity issue if
-\f[C]cache\f[] isn\[aq]t able to provide file data anymore.
+Since reading from a cache stream is independent from downloading file
+data, readers can get to a point where there\[aq]s no more data in the
+cache.
+Most of the time this can indicate a connectivity issue if cache
+isn\[aq]t able to provide file data anymore.
.PP
For really slow connections, increase this to a point where the stream
is able to provide data, but your experience will be very stuttery.
-.PP
-\f[B]Default\f[]: 10
-.SS \-\-cache\-workers=WORKERS
+.IP \[bu] 2
+Config: read_retries
+.IP \[bu] 2
+Env Var: RCLONE_CACHE_READ_RETRIES
+.IP \[bu] 2
+Type: int
+.IP \[bu] 2
+Default: 10
+.SS \-\-cache\-workers
.PP
How many workers should run in parallel to download chunks.
.PP
@@ -9609,28 +11538,46 @@ to readers.
\f[B]Note\f[]: If the optional Plex integration is enabled then this
setting will adapt to the type of reading performed and the value
specified here will be used as a maximum number of workers to use.
-\f[B]Default\f[]: 4
+.IP \[bu] 2
+Config: workers
+.IP \[bu] 2
+Env Var: RCLONE_CACHE_WORKERS
+.IP \[bu] 2
+Type: int
+.IP \[bu] 2
+Default: 4
.SS \-\-cache\-chunk\-no\-memory
.PP
-By default, \f[C]cache\f[] will keep file data during streaming in RAM
-as well to provide it to readers as fast as possible.
+Disable the in\-memory cache for storing chunks during streaming.
+.PP
+By default, cache will keep file data during streaming in RAM as well to
+provide it to readers as fast as possible.
.PP
This transient data is evicted as soon as it is read and the number of
chunks stored doesn\[aq]t exceed the number of workers.
-However, depending on other settings like \f[C]cache\-chunk\-size\f[]
-and \f[C]cache\-workers\f[] this footprint can increase if there are
-parallel streams too (multiple files being read at the same time).
+However, depending on other settings like "cache\-chunk\-size" and
+"cache\-workers" this footprint can increase if there are parallel
+streams too (multiple files being read at the same time).
.PP
If the hardware permits it, use this feature to provide an overall
better performance during streaming but it can also be disabled if RAM
is not available on the local machine.
+.IP \[bu] 2
+Config: chunk_no_memory
+.IP \[bu] 2
+Env Var: RCLONE_CACHE_CHUNK_NO_MEMORY
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS \-\-cache\-rps
.PP
-\f[B]Default\f[]: not set
-.SS \-\-cache\-rps=NUMBER
+Limits the number of requests per second to the source FS (\-1 to
+disable)
.PP
This setting places a hard limit on the number of requests per second
-that \f[C]cache\f[] will be doing to the cloud provider remote and try
-to respect that value by setting waits between reads.
+that cache will be doing to the cloud provider remote and try to respect
+that value by setting waits between reads.
.PP
If you find that you\[aq]re getting banned or limited on the cloud
provider through cache and know that a smaller number of requests per
@@ -9643,43 +11590,81 @@ useless but it is available to set for more special cases.
\f[B]NOTE\f[]: This will limit the number of requests during streams but
other API calls to the cloud provider like directory listings will still
pass.
-.PP
-\f[B]Default\f[]: disabled
+.IP \[bu] 2
+Config: rps
+.IP \[bu] 2
+Env Var: RCLONE_CACHE_RPS
+.IP \[bu] 2
+Type: int
+.IP \[bu] 2
+Default: \-1
.SS \-\-cache\-writes
.PP
+Cache file data on writes through the FS
+.PP
If you need to read files immediately after you upload them through
-\f[C]cache\f[] you can enable this flag to have their data stored in the
-cache store at the same time during upload.
+cache you can enable this flag to have their data stored in the cache
+store at the same time during upload.
+.IP \[bu] 2
+Config: writes
+.IP \[bu] 2
+Env Var: RCLONE_CACHE_WRITES
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS \-\-cache\-tmp\-upload\-path
.PP
-\f[B]Default\f[]: not set
-.SS \-\-cache\-tmp\-upload\-path=PATH
+Directory to keep temporary files until they are uploaded.
.PP
-This is the path where \f[C]cache\f[] will use as a temporary storage
-for new files that need to be uploaded to the cloud provider.
+This is the path that cache will use as temporary storage for new
+files that need to be uploaded to the cloud provider.
.PP
Specifying a value will enable this feature.
Without it, it is completely disabled and files will be uploaded
directly to the cloud provider.
+.IP \[bu] 2
+Config: tmp_upload_path
+.IP \[bu] 2
+Env Var: RCLONE_CACHE_TMP_UPLOAD_PATH
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-cache\-tmp\-wait\-time
.PP
-\f[B]Default\f[]: empty
-.SS \-\-cache\-tmp\-wait\-time=DURATION
+How long files should be stored in local cache before being uploaded.
.PP
This is the duration that a file must wait in the temporary location
\f[I]cache\-tmp\-upload\-path\f[] before it is selected for upload.
.PP
Note that only one file is uploaded at a time and it can take longer to
start the upload if a queue has formed for this purpose.
+.IP \[bu] 2
+Config: tmp_wait_time
+.IP \[bu] 2
+Env Var: RCLONE_CACHE_TMP_WAIT_TIME
+.IP \[bu] 2
+Type: Duration
+.IP \[bu] 2
+Default: 15s
+.SS \-\-cache\-db\-wait\-time
.PP
-\f[B]Default\f[]: 15m
-.SS \-\-cache\-db\-wait\-time=DURATION
+How long to wait for the DB to be available \- 0 is unlimited
.PP
Only one process can have the DB open at any one time, so rclone waits
for this duration for the DB to become available before it gives an
error.
.PP
If you set it to 0 then it will wait forever.
-.PP
-\f[B]Default\f[]: 1s
+.IP \[bu] 2
+Config: db_wait_time
+.IP \[bu] 2
+Env Var: RCLONE_CACHE_DB_WAIT_TIME
+.IP \[bu] 2
+Type: Duration
+.IP \[bu] 2
+Default: 1s
.SS Crypt
.PP
The \f[C]crypt\f[] remote encrypts and decrypts another remote.
@@ -9997,11 +11982,117 @@ authenticator.
Note that you should use the \f[C]rclone\ cryptcheck\f[] command to
check the integrity of a crypted remote instead of
\f[C]rclone\ check\f[] which can\[aq]t check the checksums properly.
-.SS Specific options
+.SS Standard Options
.PP
-Here are the command line options specific to this cloud storage system.
+Here are the standard options specific to crypt (Encrypt/Decrypt a
+remote).
+.SS \-\-crypt\-remote
+.PP
+Remote to encrypt/decrypt.
+Normally should contain a \[aq]:\[aq] and a path, eg
+"myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not
+recommended).
+.IP \[bu] 2
+Config: remote
+.IP \[bu] 2
+Env Var: RCLONE_CRYPT_REMOTE
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-crypt\-filename\-encryption
+.PP
+How to encrypt the filenames.
+.IP \[bu] 2
+Config: filename_encryption
+.IP \[bu] 2
+Env Var: RCLONE_CRYPT_FILENAME_ENCRYPTION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: "standard"
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+"off"
+.RS 2
+.IP \[bu] 2
+Don\[aq]t encrypt the file names.
+Adds a ".bin" extension only.
+.RE
+.IP \[bu] 2
+"standard"
+.RS 2
+.IP \[bu] 2
+Encrypt the filenames; see the docs for the details.
+.RE
+.IP \[bu] 2
+"obfuscate"
+.RS 2
+.IP \[bu] 2
+Very simple filename obfuscation.
+.RE
+.RE
+.SS \-\-crypt\-directory\-name\-encryption
+.PP
+Option to either encrypt directory names or leave them intact.
+.IP \[bu] 2
+Config: directory_name_encryption
+.IP \[bu] 2
+Env Var: RCLONE_CRYPT_DIRECTORY_NAME_ENCRYPTION
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: true
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+"true"
+.RS 2
+.IP \[bu] 2
+Encrypt directory names.
+.RE
+.IP \[bu] 2
+"false"
+.RS 2
+.IP \[bu] 2
+Don\[aq]t encrypt directory names, leave them intact.
+.RE
+.RE
+.SS \-\-crypt\-password
+.PP
+Password or pass phrase for encryption.
+.IP \[bu] 2
+Config: password
+.IP \[bu] 2
+Env Var: RCLONE_CRYPT_PASSWORD
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-crypt\-password2
+.PP
+Password or pass phrase for salt.
+Optional but recommended.
+Should be different to the previous password.
+.IP \[bu] 2
+Config: password2
+.IP \[bu] 2
+Env Var: RCLONE_CRYPT_PASSWORD2
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
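A crypt remote combining these standard options might look like this sketch in rclone.conf. The names are placeholders, and the password fields are omitted because rclone stores them obscured, so they should be set with rclone config rather than by editing the file:

```ini
# Hypothetical crypt remote; set password/password2 via "rclone config".
[secret]
type = crypt
remote = mydrive:encrypted
filename_encryption = standard
directory_name_encryption = true
```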
+.SS Advanced Options
+.PP
+Here are the advanced options specific to crypt (Encrypt/Decrypt a
+remote).
.SS \-\-crypt\-show\-mapping
.PP
+For all files listed show how the names encrypt.
+.PP
If this flag is set then for each file that the remote is asked to list,
it will log (at level INFO) a line stating the decrypted file name and
the encrypted file name.
@@ -10009,6 +12100,14 @@ the encrypted file name.
This is so you can work out which encrypted names are which decrypted
names just in case you need to do something with the encrypted file
names, or for debugging purposes.
+.IP \[bu] 2
+Config: show_mapping
+.IP \[bu] 2
+Env Var: RCLONE_CRYPT_SHOW_MAPPING
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS Backing up a crypted remote
.PP
If you wish to back up a crypted remote, it is recommended that you use
@@ -10282,20 +12381,54 @@ If you don\[aq]t want this to happen use \f[C]\-\-size\-only\f[] or
Dropbox supports its own hash
type (https://www.dropbox.com/developers/reference/content-hash) which
is checked for all transfers.
-.SS Specific options
+.SS Standard Options
.PP
-Here are the command line options specific to this cloud storage system.
-.SS \-\-dropbox\-chunk\-size=SIZE
+Here are the standard options specific to dropbox (Dropbox).
+.SS \-\-dropbox\-client\-id
+.PP
+Dropbox App Client Id.
+Leave blank normally.
+.IP \[bu] 2
+Config: client_id
+.IP \[bu] 2
+Env Var: RCLONE_DROPBOX_CLIENT_ID
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-dropbox\-client\-secret
+.PP
+Dropbox App Client Secret.
+Leave blank normally.
+.IP \[bu] 2
+Config: client_secret
+.IP \[bu] 2
+Env Var: RCLONE_DROPBOX_CLIENT_SECRET
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS Advanced Options
+.PP
+Here are the advanced options specific to dropbox (Dropbox).
+.SS \-\-dropbox\-chunk\-size
+.PP
+Upload chunk size.
+(< 150M).
.PP
Any files larger than this will be uploaded in chunks of this size.
-The default is 48MB.
-The maximum is 150MB.
.PP
Note that chunks are buffered in memory (one at a time) so rclone can
deal with retries.
Setting this larger will increase the speed slightly (at most 10% for
128MB in tests) at the cost of using more memory.
It can be set smaller if you are tight on memory.
+.IP \[bu] 2
+Config: chunk_size
+.IP \[bu] 2
+Env Var: RCLONE_DROPBOX_CHUNK_SIZE
+.IP \[bu] 2
+Type: SizeSuffix
+.IP \[bu] 2
+Default: 48M
.SS Limitations
.PP
Note that Dropbox is case insensitive so you can\[aq]t have a file
@@ -10450,6 +12583,63 @@ Any times you see on the server will be time of upload.
.SS Checksums
.PP
FTP does not support any checksums.
+.SS Standard Options
+.PP
+Here are the standard options specific to ftp (FTP Connection).
+.SS \-\-ftp\-host
+.PP
+FTP host to connect to
+.IP \[bu] 2
+Config: host
+.IP \[bu] 2
+Env Var: RCLONE_FTP_HOST
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+"ftp.example.com"
+.RS 2
+.IP \[bu] 2
+Connect to ftp.example.com
+.RE
+.RE
+.SS \-\-ftp\-user
+.PP
+FTP username, leave blank for current username, ncw
+.IP \[bu] 2
+Config: user
+.IP \[bu] 2
+Env Var: RCLONE_FTP_USER
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-ftp\-port
+.PP
+FTP port, leave blank to use default (21)
+.IP \[bu] 2
+Config: port
+.IP \[bu] 2
+Env Var: RCLONE_FTP_PORT
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-ftp\-pass
+.PP
+FTP password
+.IP \[bu] 2
+Config: pass
+.IP \[bu] 2
+Env Var: RCLONE_FTP_PASS
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
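These options all map to environment variables, so an FTP connection can be sketched without touching the config file. The host, user and port below are placeholders, and the commented rclone call assumes the on\-the\-fly ":ftp:" remote syntax:

```shell
# Placeholder credentials; the password is omitted here (set RCLONE_FTP_PASS
# to an obscured password, e.g. one produced by "rclone obscure").
export RCLONE_FTP_HOST=ftp.example.com
export RCLONE_FTP_USER=alice
export RCLONE_FTP_PORT=2121
# rclone lsd :ftp:   # would list directories on ftp.example.com:2121 as alice
echo "$RCLONE_FTP_HOST:$RCLONE_FTP_PORT"
```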
.SS Limitations
.PP
Note that since FTP isn\[aq]t HTTP based the following flags don\[aq]t
@@ -10710,6 +12900,320 @@ See the rclone docs (/docs/#fast-list) for more details.
Google Cloud Storage stores md5sums natively and rclone stores
modification times as metadata on the object, under the "mtime" key in
RFC3339 format accurate to 1ns.
+.SS Standard Options
+.PP
+Here are the standard options specific to google cloud storage (Google
+Cloud Storage (this is not Google Drive)).
+.SS \-\-gcs\-client\-id
+.PP
+Google Application Client Id.
+Leave blank normally.
+.IP \[bu] 2
+Config: client_id
+.IP \[bu] 2
+Env Var: RCLONE_GCS_CLIENT_ID
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-gcs\-client\-secret
+.PP
+Google Application Client Secret.
+Leave blank normally.
+.IP \[bu] 2
+Config: client_secret
+.IP \[bu] 2
+Env Var: RCLONE_GCS_CLIENT_SECRET
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-gcs\-project\-number
+.PP
+Project number.
+Optional \- needed only for list/create/delete buckets \- see your
+developer console.
+.IP \[bu] 2
+Config: project_number
+.IP \[bu] 2
+Env Var: RCLONE_GCS_PROJECT_NUMBER
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-gcs\-service\-account\-file
+.PP
+Service Account Credentials JSON file path.
+Leave blank normally.
+Needed only if you want to use SA instead of interactive login.
+.IP \[bu] 2
+Config: service_account_file
+.IP \[bu] 2
+Env Var: RCLONE_GCS_SERVICE_ACCOUNT_FILE
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-gcs\-service\-account\-credentials
+.PP
+Service Account Credentials JSON blob.
+Leave blank normally.
+Needed only if you want to use SA instead of interactive login.
+.IP \[bu] 2
+Config: service_account_credentials
+.IP \[bu] 2
+Env Var: RCLONE_GCS_SERVICE_ACCOUNT_CREDENTIALS
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-gcs\-object\-acl
+.PP
+Access Control List for new objects.
+.IP \[bu] 2
+Config: object_acl
+.IP \[bu] 2
+Env Var: RCLONE_GCS_OBJECT_ACL
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+"authenticatedRead"
+.RS 2
+.IP \[bu] 2
+Object owner gets OWNER access, and all Authenticated Users get READER
+access.
+.RE
+.IP \[bu] 2
+"bucketOwnerFullControl"
+.RS 2
+.IP \[bu] 2
+Object owner gets OWNER access, and project team owners get OWNER
+access.
+.RE
+.IP \[bu] 2
+"bucketOwnerRead"
+.RS 2
+.IP \[bu] 2
+Object owner gets OWNER access, and project team owners get READER
+access.
+.RE
+.IP \[bu] 2
+"private"
+.RS 2
+.IP \[bu] 2
+Object owner gets OWNER access [default if left blank].
+.RE
+.IP \[bu] 2
+"projectPrivate"
+.RS 2
+.IP \[bu] 2
+Object owner gets OWNER access, and project team members get access
+according to their roles.
+.RE
+.IP \[bu] 2
+"publicRead"
+.RS 2
+.IP \[bu] 2
+Object owner gets OWNER access, and all Users get READER access.
+.RE
+.RE
+.SS \-\-gcs\-bucket\-acl
+.PP
+Access Control List for new buckets.
+.IP \[bu] 2
+Config: bucket_acl
+.IP \[bu] 2
+Env Var: RCLONE_GCS_BUCKET_ACL
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+"authenticatedRead"
+.RS 2
+.IP \[bu] 2
+Project team owners get OWNER access, and all Authenticated Users get
+READER access.
+.RE
+.IP \[bu] 2
+"private"
+.RS 2
+.IP \[bu] 2
+Project team owners get OWNER access [default if left blank].
+.RE
+.IP \[bu] 2
+"projectPrivate"
+.RS 2
+.IP \[bu] 2
+Project team members get access according to their roles.
+.RE
+.IP \[bu] 2
+"publicRead"
+.RS 2
+.IP \[bu] 2
+Project team owners get OWNER access, and all Users get READER access.
+.RE
+.IP \[bu] 2
+"publicReadWrite"
+.RS 2
+.IP \[bu] 2
+Project team owners get OWNER access, and all Users get WRITER access.
+.RE
+.RE
+.SS \-\-gcs\-location
+.PP
+Location for the newly created buckets.
+.IP \[bu] 2
+Config: location
+.IP \[bu] 2
+Env Var: RCLONE_GCS_LOCATION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+""
+.RS 2
+.IP \[bu] 2
+Empty for default location (US).
+.RE
+.IP \[bu] 2
+"asia"
+.RS 2
+.IP \[bu] 2
+Multi\-regional location for Asia.
+.RE
+.IP \[bu] 2
+"eu"
+.RS 2
+.IP \[bu] 2
+Multi\-regional location for Europe.
+.RE
+.IP \[bu] 2
+"us"
+.RS 2
+.IP \[bu] 2
+Multi\-regional location for United States.
+.RE
+.IP \[bu] 2
+"asia\-east1"
+.RS 2
+.IP \[bu] 2
+Taiwan.
+.RE
+.IP \[bu] 2
+"asia\-northeast1"
+.RS 2
+.IP \[bu] 2
+Tokyo.
+.RE
+.IP \[bu] 2
+"asia\-southeast1"
+.RS 2
+.IP \[bu] 2
+Singapore.
+.RE
+.IP \[bu] 2
+"australia\-southeast1"
+.RS 2
+.IP \[bu] 2
+Sydney.
+.RE
+.IP \[bu] 2
+"europe\-west1"
+.RS 2
+.IP \[bu] 2
+Belgium.
+.RE
+.IP \[bu] 2
+"europe\-west2"
+.RS 2
+.IP \[bu] 2
+London.
+.RE
+.IP \[bu] 2
+"us\-central1"
+.RS 2
+.IP \[bu] 2
+Iowa.
+.RE
+.IP \[bu] 2
+"us\-east1"
+.RS 2
+.IP \[bu] 2
+South Carolina.
+.RE
+.IP \[bu] 2
+"us\-east4"
+.RS 2
+.IP \[bu] 2
+Northern Virginia.
+.RE
+.IP \[bu] 2
+"us\-west1"
+.RS 2
+.IP \[bu] 2
+Oregon.
+.RE
+.RE
+.SS \-\-gcs\-storage\-class
+.PP
+The storage class to use when storing objects in Google Cloud Storage.
+.IP \[bu] 2
+Config: storage_class
+.IP \[bu] 2
+Env Var: RCLONE_GCS_STORAGE_CLASS
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+""
+.RS 2
+.IP \[bu] 2
+Default
+.RE
+.IP \[bu] 2
+"MULTI_REGIONAL"
+.RS 2
+.IP \[bu] 2
+Multi\-regional storage class
+.RE
+.IP \[bu] 2
+"REGIONAL"
+.RS 2
+.IP \[bu] 2
+Regional storage class
+.RE
+.IP \[bu] 2
+"NEARLINE"
+.RS 2
+.IP \[bu] 2
+Nearline storage class
+.RE
+.IP \[bu] 2
+"COLDLINE"
+.RS 2
+.IP \[bu] 2
+Coldline storage class
+.RE
+.IP \[bu] 2
+"DURABLE_REDUCED_AVAILABILITY"
+.RS 2
+.IP \[bu] 2
+Durable reduced availability storage class
+.RE
+.RE
.SS Google Drive
.PP
Paths are specified as \f[C]drive:path\f[]
@@ -11143,37 +13647,14 @@ To view your current quota you can use the
limit (quota), the usage in Google Drive, the size of all files in the
Trash and the space used by other Google services such as Gmail.
This command does not take any path arguments.
-.SS Specific options
+.SS Import/Export of google documents
.PP
-Here are the command line options specific to this cloud storage system.
-.SS \-\-drive\-acknowledge\-abuse
+Google documents can be exported from and uploaded to Google Drive.
.PP
-If downloading a file returns the error
-\f[C]This\ file\ has\ been\ identified\ as\ malware\ or\ spam\ and\ cannot\ be\ downloaded\f[]
-with the error code \f[C]cannotDownloadAbusiveFile\f[] then supply this
-flag to rclone to indicate you acknowledge the risks of downloading the
-file and rclone will download it anyway.
-.SS \-\-drive\-auth\-owner\-only
-.PP
-Only consider files owned by the authenticated user.
-.SS \-\-drive\-chunk\-size=SIZE
-.PP
-Upload chunk size.
-Must a power of 2 >= 256k.
-Default value is 8 MB.
-.PP
-Making this larger will improve performance, but note that each chunk is
-buffered in memory one per transfer.
-.PP
-Reducing this will reduce memory usage but decrease performance.
-.SS \-\-drive\-formats
-.PP
-Google documents can only be exported from Google drive.
When rclone downloads a Google doc it chooses a format to download
-depending upon this setting.
-.PP
-By default the formats are \f[C]docx,xlsx,pptx,svg\f[] which are a
-sensible default for an editable document.
+depending upon the \f[C]\-\-drive\-export\-formats\f[] setting.
+By default the export formats are \f[C]docx,xlsx,pptx,svg\f[] which are
+a sensible default for an editable document.
.PP
When choosing a format, rclone runs down the list provided in order and
chooses the first file format the doc can be exported as from the list.
@@ -11181,15 +13662,139 @@ If the file can\[aq]t be exported to a format on the formats list, then
rclone will choose a format from the default list.
.PP
If you prefer an archive copy then you might use
-\f[C]\-\-drive\-formats\ pdf\f[], or if you prefer
+\f[C]\-\-drive\-export\-formats\ pdf\f[], or if you prefer
openoffice/libreoffice formats you might use
-\f[C]\-\-drive\-formats\ ods,odt,odp\f[].
+\f[C]\-\-drive\-export\-formats\ ods,odt,odp\f[].
.PP
Note that rclone adds the extension to the google doc, so if it is
called \f[C]My\ Spreadsheet\f[] on google docs, it will be exported as
\f[C]My\ Spreadsheet.xlsx\f[] or \f[C]My\ Spreadsheet.pdf\f[] etc.
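For example, to keep archive copies of Google docs as PDFs, or to prefer open formats, the export list can be set per command (the remote name drive: and the paths here are illustrative):

```shell
# Export any Google docs in the folder as PDF; regular files copy unchanged.
rclone copy --drive-export-formats pdf drive:Documents /backup/documents

# Prefer openoffice/libreoffice formats; docs that can't be exported to
# ods/odt/odp fall back to a format from the default list.
rclone copy --drive-export-formats ods,odt,odp drive:Documents /backup/documents
```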
.PP
-Here are the possible extensions with their corresponding mime types.
+When importing files into Google Drive, rclone will convert all files
+with an extension in \f[C]\-\-drive\-import\-formats\f[] to their
+associated document type.
+rclone will not convert any files by default, since the conversion is a
+lossy process.
+.PP
+The conversion must result in a file with the same extension when the
+\f[C]\-\-drive\-export\-formats\f[] rules are applied to the uploaded
+document.
+.PP
+Here are some examples for allowed and prohibited conversions.
+.PP
+.TS
+tab(@);
+l l l l l.
+T{
+export\-formats
+T}@T{
+import\-formats
+T}@T{
+Upload Ext
+T}@T{
+Document Ext
+T}@T{
+Allowed
+T}
+_
+T{
+odt
+T}@T{
+odt
+T}@T{
+odt
+T}@T{
+odt
+T}@T{
+Yes
+T}
+T{
+odt
+T}@T{
+docx,odt
+T}@T{
+odt
+T}@T{
+odt
+T}@T{
+Yes
+T}
+T{
+T}@T{
+docx
+T}@T{
+docx
+T}@T{
+docx
+T}@T{
+Yes
+T}
+T{
+T}@T{
+odt
+T}@T{
+odt
+T}@T{
+docx
+T}@T{
+No
+T}
+T{
+odt,docx
+T}@T{
+docx,odt
+T}@T{
+docx
+T}@T{
+odt
+T}@T{
+No
+T}
+T{
+docx,odt
+T}@T{
+docx,odt
+T}@T{
+docx
+T}@T{
+docx
+T}@T{
+Yes
+T}
+T{
+docx,odt
+T}@T{
+docx,odt
+T}@T{
+odt
+T}@T{
+docx
+T}@T{
+No
+T}
+.TE
+.PP
+This limitation can be disabled by specifying
+\f[C]\-\-drive\-allow\-import\-name\-change\f[].
+When using this flag, rclone can convert multiple file types resulting
+in the same document type at once, eg with
+\f[C]\-\-drive\-import\-formats\ docx,odt,txt\f[], all files having
+these extensions would result in a document represented as a docx file.
+This brings the additional risk of overwriting a document, if multiple
+files have the same stem.
+Many rclone operations will not handle this name change in any way.
+They assume an equal name when copying files and might copy the file
+again or delete it when the name changes.
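The import side can be sketched the same way, assuming a remote named drive: and some local office documents (paths are illustrative):

```shell
# Convert uploaded .odt files to Google Docs (conversion is lossy and
# therefore off by default).
rclone copy --drive-import-formats odt /local/reports drive:reports

# Permit the extension to change on import, accepting that sync may
# re-copy such files.
rclone copy --drive-import-formats docx,odt,txt \
    --drive-allow-import-name-change /local/reports drive:reports
```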
+.PP
+Here are the possible export extensions with their corresponding mime
+types.
+Most of these can also be used for importing, but there are more that
+are not listed here.
+Some of these additional ones might only be available when the operating
+system provides the correct MIME type entries.
+.PP
+This list can be changed by Google Drive at any time and might not
+represent the currently available conversions.
.PP
.TS
tab(@);
@@ -11210,13 +13815,6 @@ T}@T{
Standard CSV format for Spreadsheets
T}
T{
-doc
-T}@T{
-application/msword
-T}@T{
-Micosoft Office Document
-T}
-T{
docx
T}@T{
application/vnd.openxmlformats\-officedocument.wordprocessingml.document
@@ -11245,6 +13843,13 @@ T}@T{
A JPEG Image File
T}
T{
+json
+T}@T{
+application/vnd.google\-apps.script+json
+T}@T{
+JSON Text Format
+T}
+T{
odp
T}@T{
application/vnd.oasis.opendocument.presentation
@@ -11322,13 +13927,6 @@ T}@T{
Plain Text
T}
T{
-xls
-T}@T{
-application/vnd.ms\-excel
-T}@T{
-Microsoft Office Spreadsheet
-T}
-T{
xlsx
T}@T{
application/vnd.openxmlformats\-officedocument.spreadsheetml.sheet
@@ -11343,8 +13941,351 @@ T}@T{
A ZIP file of HTML, Images and CSS
T}
.TE
+.PP
+Google documents can also be exported as link files.
+These files will open a browser window for the Google Docs website of
+that document when opened.
+The link file extension has to be specified as a
+\f[C]\-\-drive\-export\-formats\f[] parameter.
+They will match all available Google Documents.
+.PP
+.TS
+tab(@);
+l l l.
+T{
+Extension
+T}@T{
+Description
+T}@T{
+OS Support
+T}
+_
+T{
+desktop
+T}@T{
+freedesktop.org specified desktop entry
+T}@T{
+Linux
+T}
+T{
+link.html
+T}@T{
+An HTML Document with a redirect
+T}@T{
+All
+T}
+T{
+url
+T}@T{
+INI style link file
+T}@T{
+macOS, Windows
+T}
+T{
+webloc
+T}@T{
+macOS specific XML format
+T}@T{
+macOS
+T}
+.TE
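A hedged example of exporting link files together with an editable format (remote and paths are illustrative):

```shell
# Each Google doc is exported both as docx and as a small HTML redirect
# file that opens the document on the Google Docs website.
rclone copy --drive-export-formats docx,link.html drive:Documents /backup/documents
```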
+.SS Standard Options
+.PP
+Here are the standard options specific to drive (Google Drive).
+.SS \-\-drive\-client\-id
+.PP
+Google Application Client Id Leave blank normally.
+.IP \[bu] 2
+Config: client_id
+.IP \[bu] 2
+Env Var: RCLONE_DRIVE_CLIENT_ID
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-drive\-client\-secret
+.PP
+Google Application Client Secret Leave blank normally.
+.IP \[bu] 2
+Config: client_secret
+.IP \[bu] 2
+Env Var: RCLONE_DRIVE_CLIENT_SECRET
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-drive\-scope
+.PP
+Scope that rclone should use when requesting access from drive.
+.IP \[bu] 2
+Config: scope
+.IP \[bu] 2
+Env Var: RCLONE_DRIVE_SCOPE
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+"drive"
+.RS 2
+.IP \[bu] 2
+Full access all files, excluding Application Data Folder.
+.RE
+.IP \[bu] 2
+"drive.readonly"
+.RS 2
+.IP \[bu] 2
+Read\-only access to file metadata and file contents.
+.RE
+.IP \[bu] 2
+"drive.file"
+.RS 2
+.IP \[bu] 2
+Access to files created by rclone only.
+.IP \[bu] 2
+These are visible in the drive website.
+.IP \[bu] 2
+File authorization is revoked when the user deauthorizes the app.
+.RE
+.IP \[bu] 2
+"drive.appfolder"
+.RS 2
+.IP \[bu] 2
+Allows read and write access to the Application Data folder.
+.IP \[bu] 2
+This is not visible in the drive website.
+.RE
+.IP \[bu] 2
+"drive.metadata.readonly"
+.RS 2
+.IP \[bu] 2
+Allows read\-only access to file metadata but
+.IP \[bu] 2
+does not allow any access to read or download file content.
+.RE
+.RE
+.SS \-\-drive\-root\-folder\-id
+.PP
+ID of the root folder Leave blank normally.
+Fill in to access "Computers" folders.
+(see docs).
+.IP \[bu] 2
+Config: root_folder_id
+.IP \[bu] 2
+Env Var: RCLONE_DRIVE_ROOT_FOLDER_ID
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-drive\-service\-account\-file
+.PP
+Service Account Credentials JSON file path Leave blank normally.
+Needed only if you want to use SA instead of interactive login.
+.IP \[bu] 2
+Config: service_account_file
+.IP \[bu] 2
+Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_FILE
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS Advanced Options
+.PP
+Here are the advanced options specific to drive (Google Drive).
+.SS \-\-drive\-service\-account\-credentials
+.PP
+Service Account Credentials JSON blob Leave blank normally.
+Needed only if you want to use SA instead of interactive login.
+.IP \[bu] 2
+Config: service_account_credentials
+.IP \[bu] 2
+Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_CREDENTIALS
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-drive\-team\-drive
+.PP
+ID of the Team Drive
+.IP \[bu] 2
+Config: team_drive
+.IP \[bu] 2
+Env Var: RCLONE_DRIVE_TEAM_DRIVE
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-drive\-auth\-owner\-only
+.PP
+Only consider files owned by the authenticated user.
+.IP \[bu] 2
+Config: auth_owner_only
+.IP \[bu] 2
+Env Var: RCLONE_DRIVE_AUTH_OWNER_ONLY
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS \-\-drive\-use\-trash
+.PP
+Send files to the trash instead of deleting permanently.
+Defaults to true, namely sending files to the trash.
+Use \f[C]\-\-drive\-use\-trash=false\f[] to delete files permanently
+instead.
+.IP \[bu] 2
+Config: use_trash
+.IP \[bu] 2
+Env Var: RCLONE_DRIVE_USE_TRASH
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: true
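For instance, to bypass the trash for a one-off purge (the directory name is illustrative):

```shell
# Permanently delete instead of trashing; equivalent to setting
# RCLONE_DRIVE_USE_TRASH=false in the environment.
rclone delete --drive-use-trash=false drive:old-exports
```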
+.SS \-\-drive\-skip\-gdocs
+.PP
+Skip google documents in all listings.
+If given, gdocs practically become invisible to rclone.
+.IP \[bu] 2
+Config: skip_gdocs
+.IP \[bu] 2
+Env Var: RCLONE_DRIVE_SKIP_GDOCS
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS \-\-drive\-shared\-with\-me
+.PP
+Only show files that are shared with me.
+.PP
+Instructs rclone to operate on your "Shared with me" folder (where
+Google Drive lets you access the files and folders others have shared
+with you).
+.PP
+This works both with the "list" (lsd, lsl, etc) and the "copy" commands
+(copy, sync, etc), and with all other commands too.
+.IP \[bu] 2
+Config: shared_with_me
+.IP \[bu] 2
+Env Var: RCLONE_DRIVE_SHARED_WITH_ME
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS \-\-drive\-trashed\-only
+.PP
+Only show files that are in the trash.
+This will show trashed files in their original directory structure.
+.IP \[bu] 2
+Config: trashed_only
+.IP \[bu] 2
+Env Var: RCLONE_DRIVE_TRASHED_ONLY
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS \-\-drive\-formats
+.PP
+Deprecated: see export_formats
+.IP \[bu] 2
+Config: formats
+.IP \[bu] 2
+Env Var: RCLONE_DRIVE_FORMATS
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-drive\-export\-formats
+.PP
+Comma separated list of preferred formats for downloading Google docs.
+.IP \[bu] 2
+Config: export_formats
+.IP \[bu] 2
+Env Var: RCLONE_DRIVE_EXPORT_FORMATS
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: "docx,xlsx,pptx,svg"
+.SS \-\-drive\-import\-formats
+.PP
+Comma separated list of preferred formats for uploading Google docs.
+.IP \[bu] 2
+Config: import_formats
+.IP \[bu] 2
+Env Var: RCLONE_DRIVE_IMPORT_FORMATS
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-drive\-allow\-import\-name\-change
+.PP
+Allow the filetype to change when uploading Google docs (e.g.
+file.doc to file.docx).
+This will confuse sync and reupload every time.
+.IP \[bu] 2
+Config: allow_import_name_change
+.IP \[bu] 2
+Env Var: RCLONE_DRIVE_ALLOW_IMPORT_NAME_CHANGE
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS \-\-drive\-use\-created\-date
+.PP
+Use file created date instead of modified date.
+.PP
+Useful when downloading data and you want the creation date used in
+place of the last modified date.
+.PP
+\f[B]WARNING\f[]: This flag may have some unexpected consequences.
+.PP
+When uploading to your drive all files will be overwritten unless they
+haven\[aq]t been modified since their creation.
+And the inverse will occur while downloading.
+This side effect can be avoided by using the "\-\-checksum" flag.
+.PP
+This feature was implemented to retain the capture date of photos as
+recorded by google photos.
+You will first need to check the "Create a Google Photos folder" option
+in your google drive settings.
+You can then copy or move the photos locally and use the date the image
+was taken (created) set as the modification date.
+.IP \[bu] 2
+Config: use_created_date
+.IP \[bu] 2
+Env Var: RCLONE_DRIVE_USE_CREATED_DATE
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
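A sketch of the photo use case above, assuming the "Create a Google Photos folder" option is enabled (paths are illustrative):

```shell
# Keep capture (creation) dates as modification times; --checksum avoids
# the re-transfers this flag can otherwise cause.
rclone copy --drive-use-created-date --checksum "drive:Google Photos" /backup/photos
```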
+.SS \-\-drive\-list\-chunk
+.PP
+Size of listing chunk 100\-1000.
+0 to disable.
+.IP \[bu] 2
+Config: list_chunk
+.IP \[bu] 2
+Env Var: RCLONE_DRIVE_LIST_CHUNK
+.IP \[bu] 2
+Type: int
+.IP \[bu] 2
+Default: 1000
+.SS \-\-drive\-impersonate
+.PP
+Impersonate this user when using a service account.
+.IP \[bu] 2
+Config: impersonate
+.IP \[bu] 2
+Env Var: RCLONE_DRIVE_IMPERSONATE
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
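A minimal sketch of service-account impersonation (the credentials path and the user are assumptions):

```shell
# List drive contents while acting as a domain user via a service account.
rclone lsd --drive-service-account-file /path/to/credentials.json \
    --drive-impersonate user@example.com drive:
```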
.SS \-\-drive\-alternate\-export
.PP
+Use alternate export URLs for google documents export.
+.PP
If this option is set this instructs rclone to use an alternate set of
export URLs for drive documents.
Users have reported that the official export URLs can\[aq]t export large
@@ -11354,65 +14295,82 @@ See rclone issue #2243 (https://github.com/ncw/rclone/issues/2243) for
background, this google drive
issue (https://issuetracker.google.com/issues/36761333) and this helpful
post (https://www.labnol.org/internet/direct-links-for-google-drive/28356/).
-.SS \-\-drive\-impersonate user
+.IP \[bu] 2
+Config: alternate_export
+.IP \[bu] 2
+Env Var: RCLONE_DRIVE_ALTERNATE_EXPORT
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS \-\-drive\-upload\-cutoff
.PP
-When using a service account, this instructs rclone to impersonate the
-user passed in.
+Cutoff for switching to chunked upload.
+.IP \[bu] 2
+Config: upload_cutoff
+.IP \[bu] 2
+Env Var: RCLONE_DRIVE_UPLOAD_CUTOFF
+.IP \[bu] 2
+Type: SizeSuffix
+.IP \[bu] 2
+Default: 8M
+.SS \-\-drive\-chunk\-size
+.PP
+Upload chunk size.
+Must be a power of 2 >= 256k.
+.PP
+Making this larger will improve performance, but note that each chunk is
+buffered in memory once per transfer.
+.PP
+Reducing this will reduce memory usage but decrease performance.
+.IP \[bu] 2
+Config: chunk_size
+.IP \[bu] 2
+Env Var: RCLONE_DRIVE_CHUNK_SIZE
+.IP \[bu] 2
+Type: SizeSuffix
+.IP \[bu] 2
+Default: 8M
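For large uploads on a machine with memory to spare, the chunk size can be raised per command (the values here are illustrative):

```shell
# Bigger chunks improve throughput; peak buffer use is roughly
# --drive-chunk-size multiplied by --transfers.
rclone copy --drive-chunk-size 64M --transfers 4 /local/big-files drive:big-files
```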
+.SS \-\-drive\-acknowledge\-abuse
+.PP
+Set to allow files which return cannotDownloadAbusiveFile to be
+downloaded.
+.PP
+If downloading a file returns the error "This file has been identified
+as malware or spam and cannot be downloaded" with the error code
+"cannotDownloadAbusiveFile" then supply this flag to rclone to indicate
+you acknowledge the risks of downloading the file and rclone will
+download it anyway.
+.IP \[bu] 2
+Config: acknowledge_abuse
+.IP \[bu] 2
+Env Var: RCLONE_DRIVE_ACKNOWLEDGE_ABUSE
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS \-\-drive\-keep\-revision\-forever
.PP
-Keeps new head revision of the file forever.
-.SS \-\-drive\-list\-chunk int
+Keep new head revision of each file forever.
+.IP \[bu] 2
+Config: keep_revision_forever
+.IP \[bu] 2
+Env Var: RCLONE_DRIVE_KEEP_REVISION_FOREVER
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS \-\-drive\-v2\-download\-min\-size
.PP
-Size of listing chunk 100\-1000.
-0 to disable.
-(default 1000)
-.SS \-\-drive\-shared\-with\-me
-.PP
-Instructs rclone to operate on your "Shared with me" folder (where
-Google Drive lets you access the files and folders others have shared
-with you).
-.PP
-This works both with the "list" (lsd, lsl, etc) and the "copy" commands
-(copy, sync, etc), and with all other commands too.
-.SS \-\-drive\-skip\-gdocs
-.PP
-Skip google documents in all listings.
-If given, gdocs practically become invisible to rclone.
-.SS \-\-drive\-trashed\-only
-.PP
-Only show files that are in the trash.
-This will show trashed files in their original directory structure.
-.SS \-\-drive\-upload\-cutoff=SIZE
-.PP
-File size cutoff for switching to chunked upload.
-Default is 8 MB.
-.SS \-\-drive\-use\-trash
-.PP
-Controls whether files are sent to the trash or deleted permanently.
-Defaults to true, namely sending files to the trash.
-Use \f[C]\-\-drive\-use\-trash=false\f[] to delete files permanently
-instead.
-.SS \-\-drive\-use\-created\-date
-.PP
-Use the file creation date in place of the modification date.
-Defaults to false.
-.PP
-Useful when downloading data and you want the creation date used in
-place of the last modified date.
-.PP
-\f[B]WARNING\f[]: This flag may have some unexpected consequences.
-.PP
-When uploading to your drive all files will be overwritten unless they
-haven\[aq]t been modified since their creation.
-And the inverse will occur while downloading.
-This side effect can be avoided by using the \f[C]\-\-checksum\f[] flag.
-.PP
-This feature was implemented to retain photos capture date as recorded
-by google photos.
-You will first need to check the "Create a Google Photos folder" option
-in your google drive settings.
-You can then copy or move the photos locally and use the date the image
-was taken (created) set as the modification date.
+If objects are larger than this, use the drive v2 API to download.
+.IP \[bu] 2
+Config: v2_download_min_size
+.IP \[bu] 2
+Env Var: RCLONE_DRIVE_V2_DOWNLOAD_MIN_SIZE
+.IP \[bu] 2
+Type: SizeSuffix
+.IP \[bu] 2
+Default: off
.SS Limitations
.PP
Drive has quite a lot of rate limiting.
@@ -11642,6 +14600,30 @@ without a config file:
rclone\ lsd\ \-\-http\-url\ https://beta.rclone.org\ :http:
\f[]
.fi
+.SS Standard Options
+.PP
+Here are the standard options specific to http (http Connection).
+.SS \-\-http\-url
+.PP
+URL of http host to connect to
+.IP \[bu] 2
+Config: url
+.IP \[bu] 2
+Env Var: RCLONE_HTTP_URL
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+"https://example.com"
+.RS 2
+.IP \[bu] 2
+Connect to example.com
+.RE
+.RE
.SS Hubic
.PP
Paths are specified as \f[C]remote:path\f[]
@@ -11789,6 +14771,48 @@ amongst others) for storing the modification time for an object.
.PP
Note that Hubic wraps the Swift backend, so most of the properties of
are the same.
+.SS Standard Options
+.PP
+Here are the standard options specific to hubic (Hubic).
+.SS \-\-hubic\-client\-id
+.PP
+Hubic Client Id Leave blank normally.
+.IP \[bu] 2
+Config: client_id
+.IP \[bu] 2
+Env Var: RCLONE_HUBIC_CLIENT_ID
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-hubic\-client\-secret
+.PP
+Hubic Client Secret Leave blank normally.
+.IP \[bu] 2
+Config: client_secret
+.IP \[bu] 2
+Env Var: RCLONE_HUBIC_CLIENT_SECRET
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS Advanced Options
+.PP
+Here are the advanced options specific to hubic (Hubic).
+.SS \-\-hubic\-chunk\-size
+.PP
+Above this size files will be chunked into a _segments container.
+The default for this is 5GB which is its maximum value.
+.IP \[bu] 2
+Config: chunk_size
+.IP \[bu] 2
+Env Var: RCLONE_HUBIC_CHUNK_SIZE
+.IP \[bu] 2
+Type: SizeSuffix
+.IP \[bu] 2
+Default: 5G
.SS Limitations
.PP
This uses the normal OpenStack Swift mechanism to refresh the Swift API
@@ -11894,6 +14918,15 @@ To copy a local directory to an Jottacloud directory called backup
rclone\ copy\ /home/source\ remote:backup
\f[]
.fi
+.SS \-\-fast\-list
+.PP
+This remote supports \f[C]\-\-fast\-list\f[] which allows you to use
+fewer transactions in exchange for more memory.
+See the rclone docs (/docs/#fast-list) for more details.
+.PP
+Note that the implementation in Jottacloud always uses only a single API
+request to get the entire list, so for large folders this could lead to
+a long wait time before the first results are shown.
.SS Modified time and hashes
.PP
Jottacloud allows modification times to be set on objects accurate to 1
@@ -11911,9 +14944,12 @@ Small files will be cached in memory \- see the
\f[C]\-\-jottacloud\-md5\-memory\-limit\f[] flag.
.SS Deleting files
.PP
-Any files you delete with rclone will end up in the trash.
+By default rclone will send all files to the trash when deleting files.
Due to a lack of API documentation emptying the trash is currently only
possible via the Jottacloud website.
+If deleting permanently is required then use the
+\f[C]\-\-jottacloud\-hard\-delete\f[] flag, or set the equivalent
+environment variable.
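A sketch of both spellings (the remote name is illustrative):

```shell
# Flag form:
rclone delete --jottacloud-hard-delete remote:old-backups

# Environment variable form, handy in scripts:
RCLONE_JOTTACLOUD_HARD_DELETE=true rclone delete remote:old-backups
```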
.SS Versions
.PP
Jottacloud supports file versioning.
@@ -11921,6 +14957,103 @@ When rclone uploads a new version of a file it creates a new version of
it.
Currently rclone only supports retrieving the current version but older
versions can be accessed via the Jottacloud Website.
+.SS Quota information
+.PP
+To view your current quota you can use the
+\f[C]rclone\ about\ remote:\f[] command which will display your usage
+limit (unless it is unlimited) and the current usage.
+.SS Standard Options
+.PP
+Here are the standard options specific to jottacloud (JottaCloud).
+.SS \-\-jottacloud\-user
+.PP
+User Name
+.IP \[bu] 2
+Config: user
+.IP \[bu] 2
+Env Var: RCLONE_JOTTACLOUD_USER
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-jottacloud\-pass
+.PP
+Password.
+.IP \[bu] 2
+Config: pass
+.IP \[bu] 2
+Env Var: RCLONE_JOTTACLOUD_PASS
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-jottacloud\-mountpoint
+.PP
+The mountpoint to use.
+.IP \[bu] 2
+Config: mountpoint
+.IP \[bu] 2
+Env Var: RCLONE_JOTTACLOUD_MOUNTPOINT
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+"Sync"
+.RS 2
+.IP \[bu] 2
+Will be synced by the official client.
+.RE
+.IP \[bu] 2
+"Archive"
+.RS 2
+.IP \[bu] 2
+Archive
+.RE
+.RE
+.SS Advanced Options
+.PP
+Here are the advanced options specific to jottacloud (JottaCloud).
+.SS \-\-jottacloud\-md5\-memory\-limit
+.PP
+Files bigger than this will be cached on disk to calculate the MD5 if
+required.
+.IP \[bu] 2
+Config: md5_memory_limit
+.IP \[bu] 2
+Env Var: RCLONE_JOTTACLOUD_MD5_MEMORY_LIMIT
+.IP \[bu] 2
+Type: SizeSuffix
+.IP \[bu] 2
+Default: 10M
+.SS \-\-jottacloud\-hard\-delete
+.PP
+Delete files permanently rather than putting them into the trash.
+.IP \[bu] 2
+Config: hard_delete
+.IP \[bu] 2
+Env Var: RCLONE_JOTTACLOUD_HARD_DELETE
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS \-\-jottacloud\-unlink
+.PP
+Remove existing public link to file/folder with link command rather than
+creating.
+Default is false, meaning link command will create or retrieve public
+link.
+.IP \[bu] 2
+Config: unlink
+.IP \[bu] 2
+Env Var: RCLONE_JOTTACLOUD_UNLINK
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS Limitations
.PP
Note that Jottacloud is case insensitive so you can\[aq]t have a file
@@ -11934,14 +15067,6 @@ For example if a file has a ?
in it will be mapped to ? instead.
.PP
Jottacloud only supports filenames up to 255 characters in length.
-.SS Specific options
-.PP
-Here are the command line options specific to this cloud storage system.
-.SS \-\-jottacloud\-md5\-memory\-limit SizeSuffix
-.PP
-Files bigger than this will be cached on disk to calculate the MD5 if
-required.
-(default 10M)
.SS Troubleshooting
.PP
Jottacloud exhibits some inconsistent behaviours regarding deleted files
@@ -12056,19 +15181,63 @@ Duplicated files cause problems with the syncing and you will see
messages in the log about duplicates.
.PP
Use \f[C]rclone\ dedupe\f[] to fix duplicated files.
-.SS Specific options
+.SS Standard Options
.PP
-Here are the command line options specific to this cloud storage system.
+Here are the standard options specific to mega (Mega).
+.SS \-\-mega\-user
+.PP
+User name
+.IP \[bu] 2
+Config: user
+.IP \[bu] 2
+Env Var: RCLONE_MEGA_USER
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-mega\-pass
+.PP
+Password.
+.IP \[bu] 2
+Config: pass
+.IP \[bu] 2
+Env Var: RCLONE_MEGA_PASS
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS Advanced Options
+.PP
+Here are the advanced options specific to mega (Mega).
.SS \-\-mega\-debug
.PP
-If this flag is set (along with \f[C]\-vv\f[]) it will print further
-debugging information from the mega backend.
+Output more debug from Mega.
+.PP
+If this flag is set (along with \-vv) it will print further debugging
+information from the mega backend.
+.IP \[bu] 2
+Config: debug
+.IP \[bu] 2
+Env Var: RCLONE_MEGA_DEBUG
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS \-\-mega\-hard\-delete
.PP
+Delete files permanently rather than putting them into the trash.
+.PP
Normally the mega backend will put all deletions into the trash rather
than permanently deleting them.
-If you specify this flag (or set it in the advanced config) then rclone
-will permanently delete objects instead.
+If you specify this then rclone will permanently delete objects instead.
+.IP \[bu] 2
+Config: hard_delete
+.IP \[bu] 2
+Env Var: RCLONE_MEGA_HARD_DELETE
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS Limitations
.PP
This backend uses the go\-mega go
@@ -12277,33 +15446,129 @@ Note that rclone doesn\[aq]t commit the block list until the end of the
upload which means that there is a limit of 9.5TB of multipart uploads
in progress as Azure won\[aq]t allow more than that amount of
uncommitted blocks.
-.SS Specific options
+.SS Standard Options
.PP
-Here are the command line options specific to this cloud storage system.
-.SS \-\-azureblob\-upload\-cutoff=SIZE
+Here are the standard options specific to azureblob (Microsoft Azure
+Blob Storage).
+.SS \-\-azureblob\-account
.PP
-Cutoff for switching to chunked upload \- must be <= 256MB.
-The default is 256MB.
-.SS \-\-azureblob\-chunk\-size=SIZE
+Storage Account Name (leave blank to use connection string or SAS URL)
+.IP \[bu] 2
+Config: account
+.IP \[bu] 2
+Env Var: RCLONE_AZUREBLOB_ACCOUNT
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-azureblob\-key
+.PP
+Storage Account Key (leave blank to use connection string or SAS URL)
+.IP \[bu] 2
+Config: key
+.IP \[bu] 2
+Env Var: RCLONE_AZUREBLOB_KEY
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-azureblob\-sas\-url
+.PP
+SAS URL for container level access only (leave blank if using
+account/key or connection string)
+.IP \[bu] 2
+Config: sas_url
+.IP \[bu] 2
+Env Var: RCLONE_AZUREBLOB_SAS_URL
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS Advanced Options
+.PP
+Here are the advanced options specific to azureblob (Microsoft Azure
+Blob Storage).
+.SS \-\-azureblob\-endpoint
+.PP
+Endpoint for the service Leave blank normally.
+.IP \[bu] 2
+Config: endpoint
+.IP \[bu] 2
+Env Var: RCLONE_AZUREBLOB_ENDPOINT
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-azureblob\-upload\-cutoff
+.PP
+Cutoff for switching to chunked upload (<= 256MB).
+.IP \[bu] 2
+Config: upload_cutoff
+.IP \[bu] 2
+Env Var: RCLONE_AZUREBLOB_UPLOAD_CUTOFF
+.IP \[bu] 2
+Type: SizeSuffix
+.IP \[bu] 2
+Default: 256M
+.SS \-\-azureblob\-chunk\-size
+.PP
+Upload chunk size (<= 100MB).
.PP
-Upload chunk size.
-Default 4MB.
Note that this is stored in memory and there may be up to
-\f[C]\-\-transfers\f[] chunks stored at once in memory.
-This can be at most 100MB.
-.SS \-\-azureblob\-access\-tier=Hot/Cool/Archive
+"\-\-transfers" chunks stored at once in memory.
+.IP \[bu] 2
+Config: chunk_size
+.IP \[bu] 2
+Env Var: RCLONE_AZUREBLOB_CHUNK_SIZE
+.IP \[bu] 2
+Type: SizeSuffix
+.IP \[bu] 2
+Default: 4M
+.SS \-\-azureblob\-list\-chunk
.PP
-Azure storage supports blob tiering, you can configure tier in advanced
-settings or supply flag while performing data transfer operations.
-If there is no \f[C]access\ tier\f[] specified, rclone doesn\[aq]t apply
-any tier.
-rclone performs \f[C]Set\ Tier\f[] operation on blobs while uploading,
-if objects are not modified, specifying \f[C]access\ tier\f[] to new one
-will have no effect.
-If blobs are in \f[C]archive\ tier\f[] at remote, trying to perform data
+Size of blob list.
+.PP
+This sets the number of blobs requested in each listing chunk.
+Default is the maximum, 5000.
+"List blobs" requests are permitted 2 minutes per megabyte to complete.
+If an operation is taking longer than 2 minutes per megabyte on average,
+it will time out (
+source (https://docs.microsoft.com/en-us/rest/api/storageservices/setting-timeouts-for-blob-service-operations#exceptions-to-default-timeout-interval)
+).
+This can be used to limit the number of blob items returned, to avoid
+the time out.
+.IP \[bu] 2
+Config: list_chunk
+.IP \[bu] 2
+Env Var: RCLONE_AZUREBLOB_LIST_CHUNK
+.IP \[bu] 2
+Type: int
+.IP \[bu] 2
+Default: 5000
+.SS \-\-azureblob\-access\-tier
+.PP
+Access tier of blob: hot, cool or archive.
+.PP
+Archived blobs can be restored by setting access tier to hot or cool.
+Leave blank if you intend to use default access tier, which is set at
+account level
+.PP
+If there is no "access tier" specified, rclone doesn\[aq]t apply any
+tier.
+rclone performs "Set Tier" operation on blobs while uploading, if
+objects are not modified, specifying "access tier" to new one will have
+no effect.
+If blobs are in "archive tier" at remote, trying to perform data
transfer operations from remote will not be allowed.
-User should first restore by tiering blob to \f[C]Hot\f[] or
-\f[C]Cool\f[].
+User should first restore by tiering blob to "Hot" or "Cool".
+.IP \[bu] 2
+Config: access_tier
+.IP \[bu] 2
+Env Var: RCLONE_AZUREBLOB_ACCESS_TIER
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
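For example, to upload directly into a cheaper tier (the remote and container names are assumptions):

```shell
# Upload into the Cool tier; to read archived blobs later they must
# first be re-tiered to Hot or Cool.
rclone copy --azureblob-access-tier Cool /local/archive azblob:container/archive
```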
.SS Limitations
.PP
MD5 sums are only uploaded with chunked files if the source has an MD5
@@ -12333,51 +15598,36 @@ This will guide you through an interactive setup process:
.IP
.nf
\f[C]
-No\ remotes\ found\ \-\ make\ a\ new\ one
+e)\ Edit\ existing\ remote
n)\ New\ remote
+d)\ Delete\ remote
+r)\ Rename\ remote
+c)\ Copy\ remote
s)\ Set\ configuration\ password
-n/s>\ n
+q)\ Quit\ config
+e/n/d/r/c/s/q>\ n
name>\ remote
Type\ of\ storage\ to\ configure.
+Enter\ a\ string\ value.\ Press\ Enter\ for\ the\ default\ ("").
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
-\ 1\ /\ Amazon\ Drive
-\ \ \ \\\ "amazon\ cloud\ drive"
-\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio)
-\ \ \ \\\ "s3"
-\ 3\ /\ Backblaze\ B2
-\ \ \ \\\ "b2"
-\ 4\ /\ Dropbox
-\ \ \ \\\ "dropbox"
-\ 5\ /\ Encrypt/Decrypt\ a\ remote
-\ \ \ \\\ "crypt"
-\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
-\ \ \ \\\ "google\ cloud\ storage"
-\ 7\ /\ Google\ Drive
-\ \ \ \\\ "drive"
-\ 8\ /\ Hubic
-\ \ \ \\\ "hubic"
-\ 9\ /\ Local\ Disk
-\ \ \ \\\ "local"
-10\ /\ Microsoft\ OneDrive
+\&...
+17\ /\ Microsoft\ OneDrive
\ \ \ \\\ "onedrive"
-11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
-\ \ \ \\\ "swift"
-12\ /\ SSH/SFTP\ Connection
-\ \ \ \\\ "sftp"
-13\ /\ Yandex\ Disk
-\ \ \ \\\ "yandex"
-Storage>\ 10
-Microsoft\ App\ Client\ Id\ \-\ leave\ blank\ normally.
+\&...
+Storage>\ 17
+Microsoft\ App\ Client\ Id
+Leave\ blank\ normally.
+Enter\ a\ string\ value.\ Press\ Enter\ for\ the\ default\ ("").
client_id>
-Microsoft\ App\ Client\ Secret\ \-\ leave\ blank\ normally.
+Microsoft\ App\ Client\ Secret
+Leave\ blank\ normally.
+Enter\ a\ string\ value.\ Press\ Enter\ for\ the\ default\ ("").
client_secret>
+Edit\ advanced\ config?\ (y/n)
+y)\ Yes
+n)\ No
+y/n>\ n
Remote\ config
-Choose\ OneDrive\ account\ type?
-\ *\ Say\ b\ for\ a\ OneDrive\ business\ account
-\ *\ Say\ p\ for\ a\ personal\ OneDrive\ account
-b)\ Business
-p)\ Personal
-b/p>\ p
Use\ auto\ config?
\ *\ Say\ Y\ if\ not\ sure
\ *\ Say\ N\ if\ you\ are\ working\ on\ a\ remote\ or\ headless\ machine
@@ -12388,11 +15638,32 @@ If\ your\ browser\ doesn\[aq]t\ open\ automatically\ go\ to\ the\ following\ lin
Log\ in\ and\ authorize\ rclone\ for\ access
Waiting\ for\ code...
Got\ code
+Choose\ a\ number\ from\ below,\ or\ type\ in\ an\ existing\ value
+\ 1\ /\ OneDrive\ Personal\ or\ Business
+\ \ \ \\\ "onedrive"
+\ 2\ /\ Sharepoint\ site
+\ \ \ \\\ "sharepoint"
+\ 3\ /\ Type\ in\ driveID
+\ \ \ \\\ "driveid"
+\ 4\ /\ Type\ in\ SiteID
+\ \ \ \\\ "siteid"
+\ 5\ /\ Search\ a\ Sharepoint\ site
+\ \ \ \\\ "search"
+Your\ choice>\ 1
+Found\ 1\ drives,\ please\ select\ the\ one\ you\ want\ to\ use:
+0:\ OneDrive\ (business)\ id=b!Eqwertyuiopasdfghjklzxcvbnm\-7mnbvcxzlkjhgfdsapoiuytrewqk
+Chose\ drive\ to\ use:>\ 0
+Found\ drive\ \[aq]root\[aq]\ of\ type\ \[aq]business\[aq],\ URL:\ https://org\-my.sharepoint.com/personal/you/Documents
+Is\ that\ okay?
+y)\ Yes
+n)\ No
+y/n>\ y
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
[remote]
-client_id\ =
-client_secret\ =
-token\ =\ {"access_token":"XXXXXX"}
+type\ =\ onedrive
+token\ =\ {"access_token":"youraccesstoken","token_type":"Bearer","refresh_token":"yourrefreshtoken","expiry":"2018\-08\-26T22:39:52.486512262+08:00"}
+drive_id\ =\ b!Eqwertyuiopasdfghjklzxcvbnm\-7mnbvcxzlkjhgfdsapoiuytrewqk
+drive_type\ =\ business
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
y)\ Yes\ this\ is\ OK
e)\ Edit\ this\ remote
@@ -12436,26 +15707,41 @@ To copy a local directory to an OneDrive directory called backup
rclone\ copy\ /home/source\ remote:backup
\f[]
.fi
-.SS OneDrive for Business
+.SS Getting your own Client ID and Key
.PP
-There is additional support for OneDrive for Business.
-Select "b" when ask
-.IP
-.nf
-\f[C]
-Choose\ OneDrive\ account\ type?
-\ *\ Say\ b\ for\ a\ OneDrive\ business\ account
-\ *\ Say\ p\ for\ a\ personal\ OneDrive\ account
-b)\ Business
-p)\ Personal
-b/p>
-\f[]
-.fi
+rclone uses a pair of Client ID and Key shared by all rclone users when
+performing requests by default.
+If you are having problems with them (e.g. seeing a lot of throttling),
+you can get your own Client ID and Key by following the steps below:
+.IP "1." 3
+Open https://apps.dev.microsoft.com/#/appList, then click
+\f[C]Add\ an\ app\f[] (Choose \f[C]Converged\ applications\f[] if
+applicable)
+.IP "2." 3
+Enter a name for your app, and click continue.
+Copy and keep the \f[C]Application\ Id\f[] under the app name for later
+use.
+.IP "3." 3
+Under section \f[C]Application\ Secrets\f[], click
+\f[C]Generate\ New\ Password\f[].
+Copy and keep that password for later use.
+.IP "4." 3
+Under section \f[C]Platforms\f[], click \f[C]Add\ platform\f[], then
+\f[C]Web\f[].
+Enter \f[C]http://localhost:53682/\f[] in \f[C]Redirect\ URLs\f[].
+.IP "5." 3
+Under section \f[C]Microsoft\ Graph\ Permissions\f[], \f[C]Add\f[] these
+\f[C]delegated\ permissions\f[]: \f[C]Files.Read\f[],
+\f[C]Files.ReadWrite\f[], \f[C]Files.Read.All\f[],
+\f[C]Files.ReadWrite.All\f[], \f[C]offline_access\f[],
+\f[C]User.Read\f[].
+.IP "6." 3
+Scroll to the bottom and click \f[C]Save\f[].
.PP
-After that rclone requires an authentication of your account.
-The application will first authenticate your account, then query the
-OneDrive resource URL and do a second (silent) authentication for this
-resource URL.
+Now the application is complete.
+Run \f[C]rclone\ config\f[] to create or edit a OneDrive remote.
+Supply the app ID and password as Client ID and Secret, respectively.
+rclone will walk you through the remaining steps.
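+.PP
+As an illustration, the finished remote might then have a config
+section like this (all values below are placeholders, not real
+credentials):
+.IP
+.nf
+\f[C]
+[remote]
+type\ =\ onedrive
+client_id\ =\ your\-application\-id
+client_secret\ =\ your\-application\-password
+token\ =\ {"access_token":"XXXXXX"}
+\f[]
+.fi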
.SS Modified time and hashes
.PP
OneDrive allows modification times to be set on objects accurate to 1
@@ -12473,14 +15759,87 @@ Any files you delete with rclone will end up in the trash.
Microsoft doesn\[aq]t provide an API to permanently delete files, nor to
empty the trash, so you will have to do that with one of Microsoft\[aq]s
apps or via the OneDrive website.
-.SS Specific options
+.SS Standard Options
.PP
-Here are the command line options specific to this cloud storage system.
-.SS \-\-onedrive\-chunk\-size=SIZE
+Here are the standard options specific to onedrive (Microsoft OneDrive).
+.SS \-\-onedrive\-client\-id
+.PP
+Microsoft App Client Id. Leave blank normally.
+.IP \[bu] 2
+Config: client_id
+.IP \[bu] 2
+Env Var: RCLONE_ONEDRIVE_CLIENT_ID
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-onedrive\-client\-secret
+.PP
+Microsoft App Client Secret. Leave blank normally.
+.IP \[bu] 2
+Config: client_secret
+.IP \[bu] 2
+Env Var: RCLONE_ONEDRIVE_CLIENT_SECRET
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS Advanced Options
+.PP
+Here are the advanced options specific to onedrive (Microsoft OneDrive).
+.SS \-\-onedrive\-chunk\-size
.PP
Above this size files will be chunked \- must be multiple of 320k.
-The default is 10MB.
Note that the chunks will be buffered into memory.
+.IP \[bu] 2
+Config: chunk_size
+.IP \[bu] 2
+Env Var: RCLONE_ONEDRIVE_CHUNK_SIZE
+.IP \[bu] 2
+Type: SizeSuffix
+.IP \[bu] 2
+Default: 10M
+.SS \-\-onedrive\-drive\-id
+.PP
+The ID of the drive to use
+.IP \[bu] 2
+Config: drive_id
+.IP \[bu] 2
+Env Var: RCLONE_ONEDRIVE_DRIVE_ID
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-onedrive\-drive\-type
+.PP
+The type of the drive (personal | business | documentLibrary)
+.IP \[bu] 2
+Config: drive_type
+.IP \[bu] 2
+Env Var: RCLONE_ONEDRIVE_DRIVE_TYPE
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-onedrive\-expose\-onenote\-files
+.PP
+Set to make OneNote files show up in directory listings.
+.PP
+By default rclone will hide OneNote files in directory listings because
+operations like "Open" and "Update" won\[aq]t work on them.
+But this behaviour may also prevent you from deleting them.
+If you want to delete OneNote files or otherwise want them to show up in
+directory listings, set this option.
+.IP \[bu] 2
+Config: expose_onenote_files
+.IP \[bu] 2
+Env Var: RCLONE_ONEDRIVE_EXPOSE_ONENOTE_FILES
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
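+.PP
+For example, to make OneNote files visible so they can be listed or
+deleted, you could pass the backend flag on the command line (the path
+is illustrative):
+.IP
+.nf
+\f[C]
+rclone\ ls\ \-\-onedrive\-expose\-onenote\-files\ remote:notes
+\f[]
+.fi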
.SS Limitations
.PP
Note that OneDrive is case insensitive so you can\[aq]t have a file
@@ -12656,14 +16015,31 @@ rclone\ copy\ /home/source\ remote:backup
OpenDrive allows modification times to be set on objects accurate to 1
second.
These will be used to detect whether objects need syncing or not.
-.SS Deleting files
+.SS Standard Options
.PP
-Any files you delete with rclone will end up in the trash.
-Amazon don\[aq]t provide an API to permanently delete files, nor to
-empty the trash, so you will have to do that with one of Amazon\[aq]s
-apps or via the OpenDrive website.
-As of November 17, 2016, files are automatically deleted by Amazon from
-the trash after 30 days.
+Here are the standard options specific to opendrive (OpenDrive).
+.SS \-\-opendrive\-username
+.PP
+Username
+.IP \[bu] 2
+Config: username
+.IP \[bu] 2
+Env Var: RCLONE_OPENDRIVE_USERNAME
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-opendrive\-password
+.PP
+Password.
+.IP \[bu] 2
+Config: password
+.IP \[bu] 2
+Env Var: RCLONE_OPENDRIVE_PASSWORD
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
.SS Limitations
.PP
Note that OpenDrive is case insensitive so you can\[aq]t have a file
@@ -12856,6 +16232,129 @@ Access Key ID: \f[C]QS_ACCESS_KEY_ID\f[] or \f[C]QS_ACCESS_KEY\f[]
Secret Access Key: \f[C]QS_SECRET_ACCESS_KEY\f[] or
\f[C]QS_SECRET_KEY\f[]
.RE
+.SS Standard Options
+.PP
+Here are the standard options specific to qingstor (QingCloud Object
+Storage).
+.SS \-\-qingstor\-env\-auth
+.PP
+Get QingStor credentials from runtime.
+Only applies if access_key_id and secret_access_key are blank.
+.IP \[bu] 2
+Config: env_auth
+.IP \[bu] 2
+Env Var: RCLONE_QINGSTOR_ENV_AUTH
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+"false"
+.RS 2
+.IP \[bu] 2
+Enter QingStor credentials in the next step
+.RE
+.IP \[bu] 2
+"true"
+.RS 2
+.IP \[bu] 2
+Get QingStor credentials from the environment (env vars or IAM)
+.RE
+.RE
+.SS \-\-qingstor\-access\-key\-id
+.PP
+QingStor Access Key ID. Leave blank for anonymous access or runtime
+credentials.
+.IP \[bu] 2
+Config: access_key_id
+.IP \[bu] 2
+Env Var: RCLONE_QINGSTOR_ACCESS_KEY_ID
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-qingstor\-secret\-access\-key
+.PP
+QingStor Secret Access Key (password). Leave blank for anonymous access
+or runtime credentials.
+.IP \[bu] 2
+Config: secret_access_key
+.IP \[bu] 2
+Env Var: RCLONE_QINGSTOR_SECRET_ACCESS_KEY
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-qingstor\-endpoint
+.PP
+Enter an endpoint URL to connect to the QingStor API.
+Leave blank to use the default value "https://qingstor.com:443".
+.IP \[bu] 2
+Config: endpoint
+.IP \[bu] 2
+Env Var: RCLONE_QINGSTOR_ENDPOINT
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-qingstor\-zone
+.PP
+Zone to connect to.
+Default is "pek3a".
+.IP \[bu] 2
+Config: zone
+.IP \[bu] 2
+Env Var: RCLONE_QINGSTOR_ZONE
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+"pek3a"
+.RS 2
+.IP \[bu] 2
+The Beijing (China) Three Zone
+.IP \[bu] 2
+Needs location constraint pek3a.
+.RE
+.IP \[bu] 2
+"sh1a"
+.RS 2
+.IP \[bu] 2
+The Shanghai (China) First Zone
+.IP \[bu] 2
+Needs location constraint sh1a.
+.RE
+.IP \[bu] 2
+"gd2a"
+.RS 2
+.IP \[bu] 2
+The Guangdong (China) Second Zone
+.IP \[bu] 2
+Needs location constraint gd2a.
+.RE
+.RE
+.SS Advanced Options
+.PP
+Here are the advanced options specific to qingstor (QingCloud Object
+Storage).
+.SS \-\-qingstor\-connection\-retries
+.PP
+Number of connection retries.
+.IP \[bu] 2
+Config: connection_retries
+.IP \[bu] 2
+Env Var: RCLONE_QINGSTOR_CONNECTION_RETRIES
+.IP \[bu] 2
+Type: int
+.IP \[bu] 2
+Default: 3
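+.PP
+Each option above has a matching environment variable; for example, to
+pick a zone and raise the retry count without editing the config file
+(the value 5 is just an example):
+.IP
+.nf
+\f[C]
+export\ RCLONE_QINGSTOR_ZONE=pek3a
+export\ RCLONE_QINGSTOR_CONNECTION_RETRIES=5
+\f[]
+.fi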
.SS Swift
.PP
Swift refers to Openstack Object
@@ -13148,19 +16647,304 @@ By using \f[C]\-\-update\f[] along with
\f[C]\-\-use\-server\-modtime\f[], you can avoid the extra API call and
simply upload files whose local modtime is newer than the time it was
last uploaded.
-.SS Specific options
+.SS Standard Options
.PP
-Here are the command line options specific to this cloud storage system.
-.SS \-\-swift\-storage\-policy=STRING
+Here are the standard options specific to swift (Openstack Swift
+(Rackspace Cloud Files, Memset Memstore, OVH)).
+.SS \-\-swift\-env\-auth
.PP
-Apply the specified storage policy when creating a new container.
+Get swift credentials from environment variables in standard OpenStack
+form.
+.IP \[bu] 2
+Config: env_auth
+.IP \[bu] 2
+Env Var: RCLONE_SWIFT_ENV_AUTH
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+"false"
+.RS 2
+.IP \[bu] 2
+Enter swift credentials in the next step
+.RE
+.IP \[bu] 2
+"true"
+.RS 2
+.IP \[bu] 2
+Get swift credentials from environment vars.
+Leave other fields blank if using this.
+.RE
+.RE
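+.PP
+For example, with \f[C]env_auth\f[] set to true you could export the
+standard OpenStack variables and then use the remote directly (all
+values below are placeholders):
+.IP
+.nf
+\f[C]
+export\ OS_AUTH_URL=https://auth.example.com/v2.0
+export\ OS_USERNAME=user
+export\ OS_PASSWORD=secret
+export\ OS_TENANT_NAME=tenant
+rclone\ lsd\ myswift:
+\f[]
+.fi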
+.SS \-\-swift\-user
+.PP
+User name to log in (OS_USERNAME).
+.IP \[bu] 2
+Config: user
+.IP \[bu] 2
+Env Var: RCLONE_SWIFT_USER
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-swift\-key
+.PP
+API key or password (OS_PASSWORD).
+.IP \[bu] 2
+Config: key
+.IP \[bu] 2
+Env Var: RCLONE_SWIFT_KEY
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-swift\-auth
+.PP
+Authentication URL for server (OS_AUTH_URL).
+.IP \[bu] 2
+Config: auth
+.IP \[bu] 2
+Env Var: RCLONE_SWIFT_AUTH
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+"https://auth.api.rackspacecloud.com/v1.0"
+.RS 2
+.IP \[bu] 2
+Rackspace US
+.RE
+.IP \[bu] 2
+"https://lon.auth.api.rackspacecloud.com/v1.0"
+.RS 2
+.IP \[bu] 2
+Rackspace UK
+.RE
+.IP \[bu] 2
+"https://identity.api.rackspacecloud.com/v2.0"
+.RS 2
+.IP \[bu] 2
+Rackspace v2
+.RE
+.IP \[bu] 2
+"https://auth.storage.memset.com/v1.0"
+.RS 2
+.IP \[bu] 2
+Memset Memstore UK
+.RE
+.IP \[bu] 2
+"https://auth.storage.memset.com/v2.0"
+.RS 2
+.IP \[bu] 2
+Memset Memstore UK v2
+.RE
+.IP \[bu] 2
+"https://auth.cloud.ovh.net/v2.0"
+.RS 2
+.IP \[bu] 2
+OVH
+.RE
+.RE
+.SS \-\-swift\-user\-id
+.PP
+User ID to log in \- optional \- most swift systems use user and leave
+this blank (v3 auth) (OS_USER_ID).
+.IP \[bu] 2
+Config: user_id
+.IP \[bu] 2
+Env Var: RCLONE_SWIFT_USER_ID
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-swift\-domain
+.PP
+User domain \- optional (v3 auth) (OS_USER_DOMAIN_NAME)
+.IP \[bu] 2
+Config: domain
+.IP \[bu] 2
+Env Var: RCLONE_SWIFT_DOMAIN
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-swift\-tenant
+.PP
+Tenant name \- optional for v1 auth, this or tenant_id required
+otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+.IP \[bu] 2
+Config: tenant
+.IP \[bu] 2
+Env Var: RCLONE_SWIFT_TENANT
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-swift\-tenant\-id
+.PP
+Tenant ID \- optional for v1 auth, this or tenant required otherwise
+(OS_TENANT_ID)
+.IP \[bu] 2
+Config: tenant_id
+.IP \[bu] 2
+Env Var: RCLONE_SWIFT_TENANT_ID
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-swift\-tenant\-domain
+.PP
+Tenant domain \- optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+.IP \[bu] 2
+Config: tenant_domain
+.IP \[bu] 2
+Env Var: RCLONE_SWIFT_TENANT_DOMAIN
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-swift\-region
+.PP
+Region name \- optional (OS_REGION_NAME)
+.IP \[bu] 2
+Config: region
+.IP \[bu] 2
+Env Var: RCLONE_SWIFT_REGION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-swift\-storage\-url
+.PP
+Storage URL \- optional (OS_STORAGE_URL)
+.IP \[bu] 2
+Config: storage_url
+.IP \[bu] 2
+Env Var: RCLONE_SWIFT_STORAGE_URL
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-swift\-auth\-token
+.PP
+Auth Token from alternate authentication \- optional (OS_AUTH_TOKEN)
+.IP \[bu] 2
+Config: auth_token
+.IP \[bu] 2
+Env Var: RCLONE_SWIFT_AUTH_TOKEN
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-swift\-auth\-version
+.PP
+AuthVersion \- optional \- set to (1,2,3) if your auth URL has no
+version (ST_AUTH_VERSION)
+.IP \[bu] 2
+Config: auth_version
+.IP \[bu] 2
+Env Var: RCLONE_SWIFT_AUTH_VERSION
+.IP \[bu] 2
+Type: int
+.IP \[bu] 2
+Default: 0
+.SS \-\-swift\-endpoint\-type
+.PP
+Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)
+.IP \[bu] 2
+Config: endpoint_type
+.IP \[bu] 2
+Env Var: RCLONE_SWIFT_ENDPOINT_TYPE
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: "public"
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+"public"
+.RS 2
+.IP \[bu] 2
+Public (default, choose this if not sure)
+.RE
+.IP \[bu] 2
+"internal"
+.RS 2
+.IP \[bu] 2
+Internal (use internal service net)
+.RE
+.IP \[bu] 2
+"admin"
+.RS 2
+.IP \[bu] 2
+Admin
+.RE
+.RE
+.SS \-\-swift\-storage\-policy
+.PP
+The storage policy to use when creating a new container
+.PP
+This applies the specified storage policy when creating a new container.
The policy cannot be changed afterwards.
The allowed configuration values and their meaning depend on your Swift
storage provider.
-.SS \-\-swift\-chunk\-size=SIZE
+.IP \[bu] 2
+Config: storage_policy
+.IP \[bu] 2
+Env Var: RCLONE_SWIFT_STORAGE_POLICY
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+""
+.RS 2
+.IP \[bu] 2
+Default
+.RE
+.IP \[bu] 2
+"pcs"
+.RS 2
+.IP \[bu] 2
+OVH Public Cloud Storage
+.RE
+.IP \[bu] 2
+"pca"
+.RS 2
+.IP \[bu] 2
+OVH Public Cloud Archive
+.RE
+.RE
+.SS Advanced Options
+.PP
+Here are the advanced options specific to swift (Openstack Swift
+(Rackspace Cloud Files, Memset Memstore, OVH)).
+.SS \-\-swift\-chunk\-size
.PP
Above this size files will be chunked into a _segments container.
The default for this is 5GB which is its maximum value.
+.IP \[bu] 2
+Config: chunk_size
+.IP \[bu] 2
+Env Var: RCLONE_SWIFT_CHUNK_SIZE
+.IP \[bu] 2
+Type: SizeSuffix
+.IP \[bu] 2
+Default: 5G
.SS Modified time
.PP
The modified time is stored as metadata on the object as
@@ -13340,6 +17124,31 @@ pCloud supports MD5 and SHA1 type hashes, so you can use the
Deleted files will be moved to the trash.
Your subscription level will determine how long items stay in the trash.
\f[C]rclone\ cleanup\f[] can be used to empty the trash.
+.SS Standard Options
+.PP
+Here are the standard options specific to pcloud (Pcloud).
+.SS \-\-pcloud\-client\-id
+.PP
+Pcloud App Client Id. Leave blank normally.
+.IP \[bu] 2
+Config: client_id
+.IP \[bu] 2
+Env Var: RCLONE_PCLOUD_CLIENT_ID
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-pcloud\-client\-secret
+.PP
+Pcloud App Client Secret. Leave blank normally.
+.IP \[bu] 2
+Config: client_secret
+.IP \[bu] 2
+Env Var: RCLONE_PCLOUD_CLIENT_SECRET
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
.SS SFTP
.PP
SFTP is the Secure (or SSH) File Transfer
@@ -13514,34 +17323,6 @@ eval\ `ssh\-agent\ \-k`
.fi
.PP
These commands can be used in scripts of course.
-.SS Specific options
-.PP
-Here are the command line options specific to this remote.
-.SS \-\-sftp\-ask\-password
-.PP
-Ask for the SFTP password if needed when no password has been
-configured.
-.SS \-\-ssh\-path\-override
-.PP
-Override path used by SSH connection.
-Allows checksum calculation when SFTP and SSH paths are different.
-This issue affects among others Synology NAS boxes.
-.PP
-Shared folders can be found in directories representing volumes
-.IP
-.nf
-\f[C]
-rclone\ sync\ /home/local/directory\ remote:/directory\ \-\-ssh\-path\-override\ /volume2/directory
-\f[]
-.fi
-.PP
-Home directory can be found in a shared folder called \f[C]homes\f[]
-.IP
-.nf
-\f[C]
-rclone\ sync\ /home/local/directory\ remote:/home/directory\ \-\-ssh\-path\-override\ /volume1/homes/USER/directory
-\f[]
-.fi
.SS Modified time
.PP
Modified times are stored on the server to 1 second precision.
@@ -13554,6 +17335,173 @@ mod_sftp).
If you are using one of these servers, you can set the option
\f[C]set_modtime\ =\ false\f[] in your RClone backend configuration to
disable this behaviour.
+.SS Standard Options
+.PP
+Here are the standard options specific to sftp (SSH/SFTP Connection).
+.SS \-\-sftp\-host
+.PP
+SSH host to connect to
+.IP \[bu] 2
+Config: host
+.IP \[bu] 2
+Env Var: RCLONE_SFTP_HOST
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+"example.com"
+.RS 2
+.IP \[bu] 2
+Connect to example.com
+.RE
+.RE
+.SS \-\-sftp\-user
+.PP
+SSH username, leave blank for the current username.
+.IP \[bu] 2
+Config: user
+.IP \[bu] 2
+Env Var: RCLONE_SFTP_USER
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-sftp\-port
+.PP
+SSH port, leave blank to use default (22)
+.IP \[bu] 2
+Config: port
+.IP \[bu] 2
+Env Var: RCLONE_SFTP_PORT
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-sftp\-pass
+.PP
+SSH password, leave blank to use ssh\-agent.
+.IP \[bu] 2
+Config: pass
+.IP \[bu] 2
+Env Var: RCLONE_SFTP_PASS
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-sftp\-key\-file
+.PP
+Path to unencrypted PEM\-encoded private key file, leave blank to use
+ssh\-agent.
+.IP \[bu] 2
+Config: key_file
+.IP \[bu] 2
+Env Var: RCLONE_SFTP_KEY_FILE
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
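+.PP
+If you do not already have a suitable key, one way to create an
+unencrypted PEM\-encoded key pair is with ssh\-keygen (this assumes a
+reasonably recent OpenSSH; the path is only an example):
+.IP
+.nf
+\f[C]
+ssh\-keygen\ \-t\ rsa\ \-b\ 2048\ \-m\ PEM\ \-N\ ""\ \-f\ ~/.ssh/rclone_key
+\f[]
+.fi
+.PP
+and then set \f[C]key_file\f[] to that path in the remote
+configuration.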
+.SS \-\-sftp\-use\-insecure\-cipher
+.PP
+Enable the use of the aes128\-cbc cipher.
+This cipher is insecure and may allow plaintext data to be recovered by
+an attacker.
+.IP \[bu] 2
+Config: use_insecure_cipher
+.IP \[bu] 2
+Env Var: RCLONE_SFTP_USE_INSECURE_CIPHER
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+"false"
+.RS 2
+.IP \[bu] 2
+Use default Cipher list.
+.RE
+.IP \[bu] 2
+"true"
+.RS 2
+.IP \[bu] 2
+Enables the use of the aes128\-cbc cipher.
+.RE
+.RE
+.SS \-\-sftp\-disable\-hashcheck
+.PP
+Disable the execution of SSH commands to determine if remote file
+hashing is available.
+Leave blank or set to false to enable hashing (recommended), set to true
+to disable hashing.
+.IP \[bu] 2
+Config: disable_hashcheck
+.IP \[bu] 2
+Env Var: RCLONE_SFTP_DISABLE_HASHCHECK
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS Advanced Options
+.PP
+Here are the advanced options specific to sftp (SSH/SFTP Connection).
+.SS \-\-sftp\-ask\-password
+.PP
+Allow asking for SFTP password when needed.
+.IP \[bu] 2
+Config: ask_password
+.IP \[bu] 2
+Env Var: RCLONE_SFTP_ASK_PASSWORD
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS \-\-sftp\-path\-override
+.PP
+Override path used by SSH connection.
+.PP
+This allows checksum calculation when SFTP and SSH paths are different.
+This issue affects among others Synology NAS boxes.
+.PP
+Shared folders can be found in directories representing volumes
+.IP
+.nf
+\f[C]
+rclone\ sync\ /home/local/directory\ remote:/directory\ \-\-ssh\-path\-override\ /volume2/directory
+\f[]
+.fi
+.PP
+Home directory can be found in a shared folder called "homes"
+.IP
+.nf
+\f[C]
+rclone\ sync\ /home/local/directory\ remote:/home/directory\ \-\-ssh\-path\-override\ /volume1/homes/USER/directory
+\f[]
+.fi
+.IP \[bu] 2
+Config: path_override
+.IP \[bu] 2
+Env Var: RCLONE_SFTP_PATH_OVERRIDE
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-sftp\-set\-modtime
+.PP
+Set the modified time on the remote if set.
+.IP \[bu] 2
+Config: set_modtime
+.IP \[bu] 2
+Env Var: RCLONE_SFTP_SET_MODTIME
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: true
.SS Limitations
.PP
SFTP supports checksums if the same login has shell access and
@@ -13590,6 +17538,192 @@ work with it: \f[C]\-\-dump\-headers\f[], \f[C]\-\-dump\-bodies\f[],
.PP
Note that \f[C]\-\-timeout\f[] isn\[aq]t supported (but
\f[C]\-\-contimeout\f[] is).
+.SS Union
+.PP
+The \f[C]union\f[] remote provides a unification similar to UnionFS
+using other remotes.
+.PP
+Paths may be as deep as required or a local path, eg
+\f[C]remote:directory/subdirectory\f[] or
+\f[C]/directory/subdirectory\f[].
+.PP
+During the initial setup with \f[C]rclone\ config\f[] you will specify
+the target remotes as a space separated list.
+The target remotes can be either local paths or other remotes.
+.PP
+The order of the remotes is important as it defines which remotes take
+precedence over others if there are files with the same name in the same
+logical path.
+The last remote is the topmost remote and replaces files with the same
+name from previous remotes.
+.PP
+Only the last remote is used to write to and delete from, all other
+remotes are read\-only.
+.PP
+Subfolders can be used in a target remote.
+Assume a union remote named \f[C]backup\f[] with the remotes
+\f[C]mydrive:private/backup\ mydrive2:/backup\f[].
+Invoking \f[C]rclone\ mkdir\ backup:desktop\f[] is exactly the same as
+invoking \f[C]rclone\ mkdir\ mydrive2:/backup/desktop\f[].
+.PP
+There will be no special handling of paths containing \f[C]\&..\f[]
+segments.
+Invoking \f[C]rclone\ mkdir\ backup:../desktop\f[] is exactly the same
+as invoking \f[C]rclone\ mkdir\ mydrive2:/backup/../desktop\f[].
+.PP
+Here is an example of how to make a union called \f[C]remote\f[] for
+local folders.
+First run:
+.IP
+.nf
+\f[C]
+\ rclone\ config
+\f[]
+.fi
+.PP
+This will guide you through an interactive setup process:
+.IP
+.nf
+\f[C]
+No\ remotes\ found\ \-\ make\ a\ new\ one
+n)\ New\ remote
+s)\ Set\ configuration\ password
+q)\ Quit\ config
+n/s/q>\ n
+name>\ remote
+Type\ of\ storage\ to\ configure.
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\ 1\ /\ Alias\ for\ a\ existing\ remote
+\ \ \ \\\ "alias"
+\ 2\ /\ Amazon\ Drive
+\ \ \ \\\ "amazon\ cloud\ drive"
+\ 3\ /\ Amazon\ S3\ Compliant\ Storage\ Providers\ (AWS,\ Ceph,\ Dreamhost,\ IBM\ COS,\ Minio)
+\ \ \ \\\ "s3"
+\ 4\ /\ Backblaze\ B2
+\ \ \ \\\ "b2"
+\ 5\ /\ Box
+\ \ \ \\\ "box"
+\ 6\ /\ Builds\ a\ stackable\ unification\ remote,\ which\ can\ appear\ to\ merge\ the\ contents\ of\ several\ remotes
+\ \ \ \\\ "union"
+\ 7\ /\ Cache\ a\ remote
+\ \ \ \\\ "cache"
+\ 8\ /\ Dropbox
+\ \ \ \\\ "dropbox"
+\ 9\ /\ Encrypt/Decrypt\ a\ remote
+\ \ \ \\\ "crypt"
+10\ /\ FTP\ Connection
+\ \ \ \\\ "ftp"
+11\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
+\ \ \ \\\ "google\ cloud\ storage"
+12\ /\ Google\ Drive
+\ \ \ \\\ "drive"
+13\ /\ Hubic
+\ \ \ \\\ "hubic"
+14\ /\ JottaCloud
+\ \ \ \\\ "jottacloud"
+15\ /\ Local\ Disk
+\ \ \ \\\ "local"
+16\ /\ Mega
+\ \ \ \\\ "mega"
+17\ /\ Microsoft\ Azure\ Blob\ Storage
+\ \ \ \\\ "azureblob"
+18\ /\ Microsoft\ OneDrive
+\ \ \ \\\ "onedrive"
+19\ /\ OpenDrive
+\ \ \ \\\ "opendrive"
+20\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
+\ \ \ \\\ "swift"
+21\ /\ Pcloud
+\ \ \ \\\ "pcloud"
+22\ /\ QingCloud\ Object\ Storage
+\ \ \ \\\ "qingstor"
+23\ /\ SSH/SFTP\ Connection
+\ \ \ \\\ "sftp"
+24\ /\ Webdav
+\ \ \ \\\ "webdav"
+25\ /\ Yandex\ Disk
+\ \ \ \\\ "yandex"
+26\ /\ http\ Connection
+\ \ \ \\\ "http"
+Storage>\ union
+List\ of\ space\ separated\ remotes.
+Can\ be\ \[aq]remotea:test/dir\ remoteb:\[aq],\ \[aq]"remotea:test/space\ dir"\ remoteb:\[aq],\ etc.
+The\ last\ remote\ is\ used\ to\ write\ to.
+Enter\ a\ string\ value.\ Press\ Enter\ for\ the\ default\ ("").
+remotes>
+Remote\ config
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+[remote]
+type\ =\ union
+remotes\ =\ C:\\dir1\ C:\\dir2\ C:\\dir3
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+y)\ Yes\ this\ is\ OK
+e)\ Edit\ this\ remote
+d)\ Delete\ this\ remote
+y/e/d>\ y
+Current\ remotes:
+
+Name\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Type
+====\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ ====
+remote\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ union
+
+e)\ Edit\ existing\ remote
+n)\ New\ remote
+d)\ Delete\ remote
+r)\ Rename\ remote
+c)\ Copy\ remote
+s)\ Set\ configuration\ password
+q)\ Quit\ config
+e/n/d/r/c/s/q>\ q
+\f[]
+.fi
+.PP
+Once configured you can then use \f[C]rclone\f[] like this,
+.PP
+List directories in top level in \f[C]C:\\dir1\f[], \f[C]C:\\dir2\f[]
+and \f[C]C:\\dir3\f[]
+.IP
+.nf
+\f[C]
+rclone\ lsd\ remote:
+\f[]
+.fi
+.PP
+List all the files in \f[C]C:\\dir1\f[], \f[C]C:\\dir2\f[] and
+\f[C]C:\\dir3\f[]
+.IP
+.nf
+\f[C]
+rclone\ ls\ remote:
+\f[]
+.fi
+.PP
+Copy another local directory to the union directory called source, which
+will be placed into \f[C]C:\\dir3\f[]
+.IP
+.nf
+\f[C]
+rclone\ copy\ C:\\source\ remote:source
+\f[]
+.fi
+.SS Standard Options
+.PP
+Here are the standard options specific to union (A stackable unification
+remote, which can appear to merge the contents of several remotes).
+.SS \-\-union\-remotes
+.PP
+List of space separated remotes.
+Can be \[aq]remotea:test/dir remoteb:\[aq], \[aq]"remotea:test/space
+dir" remoteb:\[aq], etc.
+The last remote is used to write to.
+.IP \[bu] 2
+Config: remotes
+.IP \[bu] 2
+Env Var: RCLONE_UNION_REMOTES
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
.SS WebDAV
.PP
Paths are specified as \f[C]remote:path\f[]
@@ -13706,6 +17840,102 @@ However when used with Owncloud or Nextcloud rclone will support
modified times.
.PP
Hashes are not supported.
+.SS Standard Options
+.PP
+Here are the standard options specific to webdav (Webdav).
+.SS \-\-webdav\-url
+.PP
+URL of http host to connect to
+.IP \[bu] 2
+Config: url
+.IP \[bu] 2
+Env Var: RCLONE_WEBDAV_URL
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+"https://example.com"
+.RS 2
+.IP \[bu] 2
+Connect to example.com
+.RE
+.RE
+.SS \-\-webdav\-vendor
+.PP
+Name of the Webdav site/service/software you are using
+.IP \[bu] 2
+Config: vendor
+.IP \[bu] 2
+Env Var: RCLONE_WEBDAV_VENDOR
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+"nextcloud"
+.RS 2
+.IP \[bu] 2
+Nextcloud
+.RE
+.IP \[bu] 2
+"owncloud"
+.RS 2
+.IP \[bu] 2
+Owncloud
+.RE
+.IP \[bu] 2
+"sharepoint"
+.RS 2
+.IP \[bu] 2
+Sharepoint
+.RE
+.IP \[bu] 2
+"other"
+.RS 2
+.IP \[bu] 2
+Other site/service or software
+.RE
+.RE
+.SS \-\-webdav\-user
+.PP
+User name
+.IP \[bu] 2
+Config: user
+.IP \[bu] 2
+Env Var: RCLONE_WEBDAV_USER
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-webdav\-pass
+.PP
+Password.
+.IP \[bu] 2
+Config: pass
+.IP \[bu] 2
+Env Var: RCLONE_WEBDAV_PASS
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-webdav\-bearer\-token
+.PP
+Bearer token instead of user/pass (eg a Macaroon)
+.IP \[bu] 2
+Config: bearer_token
+.IP \[bu] 2
+Env Var: RCLONE_WEBDAV_BEARER_TOKEN
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
.SS Provider notes
.PP
See below for notes on specific providers.
@@ -13973,6 +18203,31 @@ If you wish to empty your trash you can use the
\f[C]rclone\ cleanup\ remote:\f[] command which will permanently delete
all your trashed files.
This command does not take any path arguments.
+.SS Standard Options
+.PP
+Here are the standard options specific to yandex (Yandex Disk).
+.SS \-\-yandex\-client\-id
+.PP
+Yandex Client Id. Leave blank normally.
+.IP \[bu] 2
+Config: client_id
+.IP \[bu] 2
+Env Var: RCLONE_YANDEX_CLIENT_ID
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \-\-yandex\-client\-secret
+.PP
+Yandex Client Secret. Leave blank normally.
+.IP \[bu] 2
+Config: client_secret
+.IP \[bu] 2
+Env Var: RCLONE_YANDEX_CLIENT_SECRET
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
.SS Local Filesystem
.PP
Local paths are specified as normal filesystem paths, eg
@@ -14057,16 +18312,13 @@ And use rclone like this:
This will use UNC paths on \f[C]c:\\src\f[] but not on \f[C]z:\\dst\f[].
Of course this will cause problems if the absolute path length of a file
exceeds 258 characters on z, so only use this option if you have to.
-.SS Specific options
-.PP
-Here are the command line options specific to local storage
-.SS \-\-copy\-links, \-L
+.SS Symlinks / Junction points
.PP
Normally rclone will ignore symlinks or junction points (which behave
like symlinks under Windows).
.PP
-If you supply this flag then rclone will follow the symlink and copy the
-pointed to file or directory.
+If you supply \f[C]\-\-copy\-links\f[] or \f[C]\-L\f[] then rclone will
+follow the symlink and copy the pointed to file or directory.
.PP
This flag applies to all commands.
.PP
@@ -14106,27 +18358,13 @@ $\ rclone\ \-L\ ls\ /tmp/a
\ \ \ \ \ \ \ \ 6\ b/one
\f[]
.fi
-.SS \-\-local\-no\-check\-updated
+.SS Restricting filesystems with \-\-one\-file\-system
.PP
-Don\[aq]t check to see if the files change during upload.
+Normally rclone will recurse through filesystems as mounted.
.PP
-Normally rclone checks the size and modification time of files as they
-are being uploaded and aborts with a message which starts
-\f[C]can\[aq]t\ copy\ \-\ source\ file\ is\ being\ updated\f[] if the
-file changes during upload.
-.PP
-However on some file systems this modification time check may fail (eg
-Glusterfs #2206 (https://github.com/ncw/rclone/issues/2206)) so this
-check can be disabled with this flag.
-.SS \-\-local\-no\-unicode\-normalization
-.PP
-This flag is deprecated now.
-Rclone no longer normalizes unicode file names, but it compares them
-with unicode normalization in the sync routine instead.
-.SS \-\-one\-file\-system, \-x
-.PP
-This tells rclone to stay in the filesystem specified by the root and
-not to recurse into different file systems.
+However if you set \f[C]\-\-one\-file\-system\f[] or \f[C]\-x\f[] this
+tells rclone to stay in the filesystem specified by the root and not to
+recurse into different file systems.
.PP
For example if you have a directory hierarchy like this
.IP
@@ -14169,14 +18407,390 @@ $\ rclone\ \-q\ ls\ root
as being on the same filesystem.
.PP
\f[B]NB\f[] This flag is only available on Unix based systems.
-On systems where it isn\[aq]t supported (eg Windows) it will not appear
-as an valid flag.
+On systems where it isn\[aq]t supported (eg Windows) it will be ignored.
+.SS Standard Options
+.PP
+Here are the standard options specific to local (Local Disk).
+.SS \-\-local\-nounc
+.PP
+Disable UNC (long path names) conversion on Windows
+.IP \[bu] 2
+Config: nounc
+.IP \[bu] 2
+Env Var: RCLONE_LOCAL_NOUNC
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+"true"
+.RS 2
+.IP \[bu] 2
+Disables long file names
+.RE
+.RE
+.SS Advanced Options
+.PP
+Here are the advanced options specific to local (Local Disk).
+.SS \-\-copy\-links
+.PP
+Follow symlinks and copy the pointed to item.
+.IP \[bu] 2
+Config: copy_links
+.IP \[bu] 2
+Env Var: RCLONE_LOCAL_COPY_LINKS
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS \-\-skip\-links
.PP
+Don\[aq]t warn about skipped symlinks.
This flag disables warning messages on skipped symlinks or junction
points, as you explicitly acknowledge that they should be skipped.
+.IP \[bu] 2
+Config: skip_links
+.IP \[bu] 2
+Env Var: RCLONE_LOCAL_SKIP_LINKS
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS \-\-local\-no\-unicode\-normalization
+.PP
+Don\[aq]t apply unicode normalization to paths and filenames
+(Deprecated)
+.PP
+This flag is deprecated now.
+Rclone no longer normalizes unicode file names, but it compares them
+with unicode normalization in the sync routine instead.
+.IP \[bu] 2
+Config: no_unicode_normalization
+.IP \[bu] 2
+Env Var: RCLONE_LOCAL_NO_UNICODE_NORMALIZATION
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS \-\-local\-no\-check\-updated
+.PP
+Don\[aq]t check to see if the files change during upload
+.PP
+Normally rclone checks the size and modification time of files as they
+are being uploaded and aborts with a message which starts "can\[aq]t
+copy \- source file is being updated" if the file changes during upload.
+.PP
+However on some file systems this modification time check may fail (eg
+Glusterfs #2206 (https://github.com/ncw/rclone/issues/2206)) so this
+check can be disabled with this flag.
+.IP \[bu] 2
+Config: no_check_updated
+.IP \[bu] 2
+Env Var: RCLONE_LOCAL_NO_CHECK_UPDATED
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS \-\-one\-file\-system
+.PP
+Don\[aq]t cross filesystem boundaries (unix/macOS only).
+.IP \[bu] 2
+Config: one_file_system
+.IP \[bu] 2
+Env Var: RCLONE_LOCAL_ONE_FILE_SYSTEM
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SH Changelog
-.SS v1.42 \- 2018\-09\-01
+.SS v1.44 \- 2018\-10\-15
+.IP \[bu] 2
+New commands
+.RS 2
+.IP \[bu] 2
+serve ftp: Add ftp server (Antoine GIRARD)
+.IP \[bu] 2
+settier: perform storage tier changes on supported remotes (sandeepkru)
+.RE
+.IP \[bu] 2
+New Features
+.RS 2
+.IP \[bu] 2
+Reworked command line help
+.RS 2
+.IP \[bu] 2
+Make default help less verbose (Nick Craig\-Wood)
+.IP \[bu] 2
+Split flags up into global and backend flags (Nick Craig\-Wood)
+.IP \[bu] 2
+Implement specialised help for flags and backends (Nick Craig\-Wood)
+.IP \[bu] 2
+Show URL of backend help page when starting config (Nick Craig\-Wood)
+.RE
+.IP \[bu] 2
+stats: Long names now split in center (Joanna Marek)
+.IP \[bu] 2
+Add \-\-log\-format flag for more control over log output (dcpu)
+.IP \[bu] 2
+rc: Add support for OPTIONS and basic CORS (frenos)
+.IP \[bu] 2
+stats: show FatalErrors and NoRetryErrors in stats (Cédric Connes)
+.RE
+.IP \[bu] 2
+Bug Fixes
+.RS 2
+.IP \[bu] 2
+Fix \-P not ending with a new line (Nick Craig\-Wood)
+.IP \[bu] 2
+config: don\[aq]t create default config dir when user supplies
+\-\-config (albertony)
+.IP \[bu] 2
+Don\[aq]t print non\-ASCII characters with \-\-progress on Windows (Nick
+Craig\-Wood)
+.IP \[bu] 2
+Correct logs for excluded items (ssaqua)
+.RE
+.IP \[bu] 2
+Mount
+.RS 2
+.IP \[bu] 2
+Remove EXPERIMENTAL tags (Nick Craig\-Wood)
+.RE
+.IP \[bu] 2
+VFS
+.RS 2
+.IP \[bu] 2
+Fix race condition detected by serve ftp tests (Nick Craig\-Wood)
+.IP \[bu] 2
+Add vfs/poll\-interval rc command (Fabian Möller)
+.IP \[bu] 2
+Enable rename for nearly all remotes using server side Move or Copy
+(Nick Craig\-Wood)
+.IP \[bu] 2
+Reduce directory cache cleared by poll\-interval (Fabian Möller)
+.IP \[bu] 2
+Remove EXPERIMENTAL tags (Nick Craig\-Wood)
+.RE
+.IP \[bu] 2
+Local
+.RS 2
+.IP \[bu] 2
+Skip bad symlinks in dir listing with \-L enabled (Cédric Connes)
+.IP \[bu] 2
+Preallocate files on Windows to reduce fragmentation (Nick Craig\-Wood)
+.IP \[bu] 2
+Preallocate files on linux with fallocate(2) (Nick Craig\-Wood)
+.RE
+.IP \[bu] 2
+Cache
+.RS 2
+.IP \[bu] 2
+Add cache/fetch rc function (Fabian Möller)
+.IP \[bu] 2
+Fix worker scale down (Fabian Möller)
+.IP \[bu] 2
+Improve performance by not sending info requests for cached chunks
+(dcpu)
+.IP \[bu] 2
+Fix error return value of cache/fetch rc method (Fabian Möller)
+.IP \[bu] 2
+Documentation fix for cache\-chunk\-total\-size (Anagh Kumar Baranwal)
+.IP \[bu] 2
+Preserve leading / in wrapped remote path (Fabian Möller)
+.IP \[bu] 2
+Add plex_insecure option to skip certificate validation (Fabian Möller)
+.IP \[bu] 2
+Remove entries that no longer exist in the source (dcpu)
+.RE
+.IP \[bu] 2
+Crypt
+.RS 2
+.IP \[bu] 2
+Preserve leading / in wrapped remote path (Fabian Möller)
+.RE
+.IP \[bu] 2
+Alias
+.RS 2
+.IP \[bu] 2
+Fix handling of Windows network paths (Nick Craig\-Wood)
+.RE
+.IP \[bu] 2
+Azure Blob
+.RS 2
+.IP \[bu] 2
+Add \-\-azureblob\-list\-chunk parameter (Santiago Rodríguez)
+.IP \[bu] 2
+Implemented settier command support on azureblob remote (sandeepkru)
+.IP \[bu] 2
+Work around SDK bug which causes errors for chunk\-sized files (Nick
+Craig\-Wood)
+.RE
+.IP \[bu] 2
+Box
+.RS 2
+.IP \[bu] 2
+Implement link sharing (Sebastian Bünger)
+.RE
+.IP \[bu] 2
+Drive
+.RS 2
+.IP \[bu] 2
+Add \-\-drive\-import\-formats \- google docs can now be imported
+(Fabian Möller)
+.RS 2
+.IP \[bu] 2
+Rewrite mime type and extension handling (Fabian Möller)
+.IP \[bu] 2
+Add document links (Fabian Möller)
+.IP \[bu] 2
+Add support for multipart document extensions (Fabian Möller)
+.IP \[bu] 2
+Add support for apps\-script to json export (Fabian Möller)
+.IP \[bu] 2
+Fix escaped chars in documents during list (Fabian Möller)
+.RE
+.IP \[bu] 2
+Add \-\-drive\-v2\-download\-min\-size a workaround for slow downloads
+(Fabian Möller)
+.IP \[bu] 2
+Improve directory notifications in ChangeNotify (Fabian Möller)
+.IP \[bu] 2
+When listing team drives in config, continue on failure (Nick
+Craig\-Wood)
+.RE
+.IP \[bu] 2
+FTP
+.RS 2
+.IP \[bu] 2
+Add a small pause after failed upload before deleting file (Nick
+Craig\-Wood)
+.RE
+.IP \[bu] 2
+Google Cloud Storage
+.RS 2
+.IP \[bu] 2
+Fix service_account_file being ignored (Fabian Möller)
+.RE
+.IP \[bu] 2
+Jottacloud
+.RS 2
+.IP \[bu] 2
+Minor improvement in quota info (omit if unlimited) (albertony)
+.IP \[bu] 2
+Add \-\-fast\-list support (albertony)
+.IP \[bu] 2
+Add permanent delete support: \-\-jottacloud\-hard\-delete (albertony)
+.IP \[bu] 2
+Add link sharing support (albertony)
+.IP \[bu] 2
+Fix handling of reserved characters (Sebastian Bünger)
+.IP \[bu] 2
+Fix socket leak on Object.Remove (Nick Craig\-Wood)
+.RE
+.IP \[bu] 2
+Onedrive
+.RS 2
+.IP \[bu] 2
+Rework to support Microsoft Graph (Cnly)
+.RS 2
+.IP \[bu] 2
+\f[B]NB\f[] this will require re\-authenticating the remote
+.RE
+.IP \[bu] 2
+Removed upload cutoff and always do session uploads (Oliver Heyme)
+.IP \[bu] 2
+Use single\-part upload for empty files (Cnly)
+.IP \[bu] 2
+Fix new fields not saved when editing old config (Alex Chen)
+.IP \[bu] 2
+Fix sometimes special chars in filenames not replaced (Alex Chen)
+.IP \[bu] 2
+Ignore OneNote files by default (Alex Chen)
+.IP \[bu] 2
+Add link sharing support (jackyzy823)
+.RE
+.IP \[bu] 2
+S3
+.RS 2
+.IP \[bu] 2
+Use custom pacer, to retry operations when reasonable (Craig Miskell)
+.IP \[bu] 2
+Use configured server\-side\-encryption and storage class options when
+calling CopyObject() (Paul Kohout)
+.IP \[bu] 2
+Make \-\-s3\-v2\-auth flag (Nick Craig\-Wood)
+.IP \[bu] 2
+Fix v2 auth on files with spaces (Nick Craig\-Wood)
+.RE
+.IP \[bu] 2
+Union
+.RS 2
+.IP \[bu] 2
+Implement union backend which reads from multiple backends (Felix
+Brucker)
+.IP \[bu] 2
+Implement optional interfaces (Move, DirMove, Copy etc) (Nick
+Craig\-Wood)
+.IP \[bu] 2
+Fix ChangeNotify to support multiple remotes (Fabian Möller)
+.IP \[bu] 2
+Fix \-\-backup\-dir on union backend (Nick Craig\-Wood)
+.RE
+.IP \[bu] 2
+WebDAV
+.RS 2
+.IP \[bu] 2
+Add another time format (Nick Craig\-Wood)
+.IP \[bu] 2
+Add a small pause after failed upload before deleting file (Nick
+Craig\-Wood)
+.IP \[bu] 2
+Add workaround for missing mtime (buergi)
+.IP \[bu] 2
+Sharepoint: Renew cookies after 12hrs (Henning Surmeier)
+.RE
+.IP \[bu] 2
+Yandex
+.RS 2
+.IP \[bu] 2
+Remove redundant nil checks (teresy)
+.RE
+.SS v1.43.1 \- 2018\-09\-07
+.PP
+Point release to fix hubic and azureblob backends.
+.IP \[bu] 2
+Bug Fixes
+.RS 2
+.IP \[bu] 2
+ncdu: Return error instead of log.Fatal in Show (Fabian Möller)
+.IP \[bu] 2
+cmd: Fix crash with \-\-progress and \-\-stats 0 (Nick Craig\-Wood)
+.IP \[bu] 2
+docs: Tidy website display (Anagh Kumar Baranwal)
+.RE
+.IP \[bu] 2
+Azure Blob
+.RS 2
+.IP \[bu] 2
+Fix multi\-part uploads (sandeepkru)
+.RE
+.IP \[bu] 2
+Hubic
+.RS 2
+.IP \[bu] 2
+Fix uploads (Nick Craig\-Wood)
+.IP \[bu] 2
+Retry auth fetching if it fails to make hubic more reliable (Nick
+Craig\-Wood)
+.RE
+.SS v1.43 \- 2018\-09\-01
.IP \[bu] 2
New backends
.RS 2
@@ -18051,6 +22665,7 @@ Onno Zweers
Jasper Lievisse Adriaanse
.IP \[bu] 2
sandeepkru
.IP \[bu] 2
HerrH
.IP \[bu] 2
@@ -18079,6 +22694,51 @@ Alex Chen
Denis
.IP \[bu] 2
bsteiss <35940619+bsteiss@users.noreply.github.com>
+.IP \[bu] 2
+Cédric Connes
+.IP \[bu] 2
+Dr. Tobias Quathamer
+.IP \[bu] 2
+dcpu <42736967+dcpu@users.noreply.github.com>
+.IP \[bu] 2
+Sheldon Rupp
+.IP \[bu] 2
+albertony <12441419+albertony@users.noreply.github.com>
+.IP \[bu] 2
+cron410
+.IP \[bu] 2
+Anagh Kumar Baranwal
+.IP \[bu] 2
+Felix Brucker
+.IP \[bu] 2
+Santiago Rodríguez
+.IP \[bu] 2
+Craig Miskell
+.IP \[bu] 2
+Antoine GIRARD
+.IP \[bu] 2
+Joanna Marek
+.IP \[bu] 2
+frenos
+.IP \[bu] 2
+ssaqua
+.IP \[bu] 2
+xnaas
+.IP \[bu] 2
+Frantisek Fuka
+.IP \[bu] 2
+Paul Kohout
+.IP \[bu] 2
+dcpu <43330287+dcpu@users.noreply.github.com>
+.IP \[bu] 2
+jackyzy823
+.IP \[bu] 2
+David Haguenauer
+.IP \[bu] 2
+teresy
+.IP \[bu] 2
+buergi
.SH Contact the rclone project
.SS Forum
.PP